
    Application of particle swarm optimisation with backward calculation to solve a fuzzy multi-objective supply chain master planning model

    Traditionally, supply chain planning problems involve variables whose uncertainty stems from uncontrolled factors. These factors have normally been modelled with complex methodologies whose solution-seeking process is often highly difficult. This work presents fuzzy set theory as a tool to model uncertainty in supply chain planning problems and proposes the particle swarm optimisation (PSO) metaheuristic combined with a backward calculation as a solution method. The aim of this combination is to provide a simple, effective way to model uncertainty, while good quality solutions are obtained with the metaheuristic thanks to its capacity to solve complex problems with satisfactory computational performance in a relatively short time.

    This research is partly supported by the Spanish Ministry of Economy and Competitiveness projects 'Methods and models for operations planning and order management in supply chains characterised by uncertainty in production due to the lack of product uniformity' (PLANGES-FHP) (Ref. DPI2011-23597) and 'Operations design and Management of Global Supply Chains' (GLOBOP) (Ref. DPI2012-38061-C02-01); by the project funded by the Polytechnic University of Valencia entitled 'Quantitative Models for the Design of Socially Responsible Supply Chains under Uncertainty Conditions. Application of Solution Strategies based on Hybrid Metaheuristics' (PAID-06-12); and by the Ministry of Science, Technology and Telecommunications of the Government of Costa Rica (MICITT), through the incentive programme of the National Council for Scientific and Technological Research (CONICIT) (contract No. FI-132-2011).

    Grillo Espinoza, H.; Peidro Payá, D.; Alemany Díaz, M. D. M.; Mula, J. (2015). Application of particle swarm optimisation with backward calculation to solve a fuzzy multi-objective supply chain master planning model. International Journal of Bio-Inspired Computation, 7(3), 157-169. https://doi.org/10.1504/IJBIC.2015.069557
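    As context for the method named above, the following is a minimal sketch of a standard PSO loop for a single-objective minimisation problem; the objective, bounds, and parameter values are illustrative assumptions, and the paper's fuzzy multi-objective model and backward-calculation step are not reproduced here.

```python
import numpy as np

# Minimal PSO sketch (illustrative parameters; not the paper's exact algorithm).
def pso(objective, bounds, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, dim))        # particle positions
    v = np.zeros_like(x)                                     # particle velocities
    pbest = x.copy()                                         # personal bests
    pbest_f = np.apply_along_axis(objective, 1, x)
    g = pbest[pbest_f.argmin()].copy()                       # global best
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                           # keep particles in bounds
        f = np.apply_along_axis(objective, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, objective(g)

# Toy usage: minimise a quadratic over two variables.
best_x, best_f = pso(lambda z: ((z - 3.0) ** 2).sum(), bounds=[(-10, 10), (-10, 10)])
print(best_x, best_f)
```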

    Designing a Multistage Supply Chain in Cross-Stage Reverse Logistics Environments: Application of Particle Swarm Optimization Algorithms

    This study designed a cross-stage reverse logistics course for defective products so that damaged products generated at downstream partners can be directly returned to upstream partners throughout the stages of a supply chain for rework and maintenance. To solve this reverse supply chain design problem, an optimal cross-stage reverse logistics mathematical model was developed. In addition, we developed a genetic algorithm (GA) and three particle swarm optimization (PSO) algorithms: the inertia weight method (PSOA_IWM), the VMax method (PSOA_VMM), and the constriction factor method (PSOA_CFM), which were employed to solve this mathematical model. Finally, a real case and five simulated cases of different scopes were used to compare the execution times, convergence times, and objective function values of the four algorithms and to validate the model proposed in this study. Regarding execution time, the GA consumed more time than the three PSO algorithms did. Regarding objective function value, the GA, PSOA_IWM, and PSOA_CFM obtained lower convergence values than PSOA_VMM did. Finally, PSOA_IWM demonstrated a faster convergence speed than PSOA_VMM, PSOA_CFM, and the GA did.
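    For reference, the three PSO velocity-update rules compared above differ only in how they bound or scale the update. A minimal sketch follows, using typical textbook parameter values rather than the ones used in the study; arrays are shaped (n_particles, dim).

```python
import numpy as np

# Sketch of the three velocity-update rules compared in the study
# (parameter values are common defaults, not taken from the paper).
def velocity_iwm(v, x, pbest, gbest, w=0.729, c1=1.49, c2=1.49, rng=np.random):
    # Inertia weight method (PSOA_IWM): scale the previous velocity by w.
    r1, r2 = rng.random(v.shape), rng.random(v.shape)
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)

def velocity_vmax(v, x, pbest, gbest, v_max=4.0, c1=2.0, c2=2.0, rng=np.random):
    # VMax method (PSOA_VMM): clamp each velocity component to [-v_max, v_max].
    r1, r2 = rng.random(v.shape), rng.random(v.shape)
    v_new = v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return np.clip(v_new, -v_max, v_max)

def velocity_cfm(v, x, pbest, gbest, c1=2.05, c2=2.05, rng=np.random):
    # Constriction factor method (PSOA_CFM): scale the whole update by chi,
    # with phi = c1 + c2 > 4 required for the factor to be defined.
    phi = c1 + c2
    chi = 2.0 / abs(2.0 - phi - np.sqrt(phi ** 2 - 4.0 * phi))
    r1, r2 = rng.random(v.shape), rng.random(v.shape)
    return chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
```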

    Investigation of domestic level EV chargers in the Distribution Network: An Assessment and mitigation solution

    This research focuses on the electrification of the transport sector. Such electrification could potentially pose challenges to the distribution system operator (DSO) in terms of reliability, power quality and cost-effective implementation. This thesis contributes both to electric vehicle (EV) load demand profiling and to the advanced use of reactive power compensation (D-STATCOM) to facilitate flexible and secure network operation. The main aim of this research is to investigate the planning and operation of low voltage distribution networks (LVDN) with increasing EV proliferation and the effects of higher-demand charging systems. This work is based on two independent strands of research. Firstly, the thesis illustrates how the flexibility and composition of aggregated EV demand can be obtained with very limited information available. Once the composition of demand is available, future energy scenarios are analysed with respect to the impact of higher EV charging rates on single-phase connections at LV distribution network level. A novel planning model based on energy scenario simulations, suitable for the utilization of existing assets, is developed. The proposed framework can provide a probabilistic risk assessment of the power quality (PQ) variations that may arise due to the proliferation of significant numbers of EV chargers. Monte Carlo (MC) simulation is applied in this regard. This probabilistic approach is used to estimate the likely impact of EV chargers against extreme-case scenarios. Secondly, in relation to increased EV penetration, dynamic reactive power reserve management through network voltage control is considered. In this regard, a generic distribution static synchronous compensator (D-STATCOM) model is adapted to achieve network voltage stability. The main emphasis is on a generic D-STATCOM modelling technique in which each individual EV charging event is represented through a probability density function, inclusive of dynamic D-STATCOM support. It demonstrates how optimal techniques can exploit the demand flexibility at each bus to meet the requirements of the network operator while maintaining the relevant steady state and/or dynamic performance indicators (voltage level) of the network. The results show that reactive power compensation through D-STATCOM, in the context of EV integration, can provide continuous voltage support and thereby facilitate 90% penetration of network customers with EV connections at a normal EV charging rate (3.68 kW). The results are improved by using optimal power flow. The results suggest that, if fast charging (up to 11 kW) is employed, up to 50% of network EV customers can be accommodated by utilising the optimal planning approach. During the case study, it is observed that transformer loading increases significantly in the presence of the D-STATCOM, reaching approximately 300% in one of the contingencies at 11 kW EV charging, so transformer upgrading is still required. A three-phase connected D-STATCOM is normally used by the DSO to address power quality issues in the network; however, a three-phase device cannot maintain the voltage level of each individual phase. A single-phase connected D-STATCOM is therefore used to control the voltage on each phase, and it is able to maintain the voltage of each individual phase at 1 p.u.
    This research will be of interest to the DSO, as it provides an insight into the issues associated with higher penetration of EV chargers that arise in the realization of a sustainable transport electrification agenda.
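    To make the probabilistic idea above concrete, the following is a deliberately simplified Monte Carlo sketch: EV chargers are assigned at random to customers on a radial LV feeder and the frequency of end-of-feeder under-voltage is estimated. The lumped voltage-drop model, the coincidence factor, and all numeric values except the 3.68 kW charging rate are illustrative assumptions, not figures from the thesis.

```python
import numpy as np

# Toy Monte Carlo assessment of EV-charger impact on a radial LV feeder.
rng = np.random.default_rng(42)
n_customers = 100
base_load_kw = 1.2            # assumed average domestic demand per customer
charger_kw = 3.68             # "normal" single-phase EV charging rate cited above
penetration = 0.9             # fraction of customers with an EV charger (assumed)
coincidence = 0.3             # assumed probability a charger is active at peak
v_nominal = 230.0
sens_v_per_kw = 0.07          # assumed end-of-feeder voltage drop per kW of load

n_trials = 10_000
violations = 0
for _ in range(n_trials):
    has_ev = rng.random(n_customers) < penetration
    charging = has_ev & (rng.random(n_customers) < coincidence)
    total_kw = n_customers * base_load_kw + charging.sum() * charger_kw
    v_end = v_nominal - sens_v_per_kw * total_kw
    violations += v_end < 0.94 * v_nominal        # EN 50160-style lower bound
print(f"P(under-voltage) ≈ {violations / n_trials:.3f}")
```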

    Online Modeling and Tuning of Parallel Stream Processing Systems

    Writing performant computer programs is hard. Code for high performance applications is profiled, tweaked, and refactored for months, specifically for the hardware on which it is to run. Consumer application code doesn't get the benefit of the endless massaging that high performance code receives, even though heterogeneous processor environments are beginning to resemble those in more performance-oriented arenas. This thesis offers a path to performant, parallel code (through stream processing) which is tuned online and automatically adapts to the environment it is given. This approach has the potential to reduce the tuning costs associated with high performance code and brings the benefit of performance tuning to consumer applications where it would otherwise be cost prohibitive. This thesis introduces a stream processing library and multiple techniques to enable its online modeling and tuning. Stream processing (also termed data-flow programming) is a compute paradigm that views an application as a set of logical kernels connected via communications links or streams. Stream processing is increasingly used by computational-x and x-informatics fields (e.g., biology, astrophysics) where the focus is on safe and fast parallelization of specific big-data applications. A major advantage of stream processing is that it enables parallelization without necessitating manual end-user management of the non-deterministic behavior often characteristic of more traditional parallel processing methods. Many big-data and high performance applications involve high-throughput processing, necessitating the use of many parallel compute kernels on several compute cores. Optimizing the orchestration of kernels has been the focus of much theoretical and empirical modeling work. Purely theoretical parallel programming models can fail when the assumptions implicit within the model are mismatched with reality (i.e., the model is incorrectly applied). Often it is unclear whether the assumptions are actually being met, even when verified under controlled conditions. Full empirical optimization solves this problem by extensively searching the range of likely configurations under native operating conditions. This, however, is expensive in both time and energy. For large, massively parallel systems, even deciding which modeling paradigm to use is often prohibitively expensive and unfortunately transient (with workload and hardware). In an ideal world, a parallel run-time would re-optimize an application continuously to match its environment, with little additional overhead. This work presents methods aimed at doing just that through low-overhead instrumentation, modeling, and optimization. Online optimization provides a good trade-off between static optimization and online heuristics. To enable online optimization, modeling decisions must be fast and relatively accurate. Online modeling and optimization of a stream processing system first requires the existence of a stream processing framework that is amenable to the intended type of dynamic manipulation. To fill this void, we developed the RaftLib C++ template library, which enables usage of the stream processing paradigm for C++ applications (it is the run-time that forms the basis of almost all the work within this dissertation). An application topology is specified by the user; however, almost everything else is optimizable by the run-time. RaftLib takes advantage of the knowledge gained during the design of several prior streaming languages (notably Auto-Pipe).
    The resultant framework enables online migration of tasks, auto-parallelization, online buffer reallocation, and other useful dynamic behaviors that were not available in many previous stream processing systems. Several benchmark applications have been designed to assess the performance gains achieved through our approaches and to compare performance to other leading stream processing frameworks. Information is essential to any modeling task; to that end, a low-overhead instrumentation framework has been developed which is both dynamic and adaptive. Discovering a fast and relatively optimal configuration for a stream processing application often necessitates solving for buffer sizes within a finite-capacity queueing network. We show that a generalized gain/loss network flow model can bootstrap the process under certain conditions. Any modeling effort requires that a model be selected, often a highly manual task involving many expensive operations. This dissertation demonstrates that machine learning methods (such as a support vector machine) can successfully select models at run-time for a streaming application. The full set of approaches is incorporated into the open source RaftLib framework.
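    To illustrate the data-flow model described above, here is a minimal conceptual sketch (written in Python rather than C++): independent kernels connected by bounded FIFO streams, running concurrently. It is an illustration of the paradigm only and does not use or imitate the RaftLib API.

```python
import threading
import queue

_END = object()  # end-of-stream marker

class Kernel(threading.Thread):
    """A logical kernel: reads from an input stream, applies fn, writes output."""
    def __init__(self, fn, inport=None, outport=None):
        super().__init__()
        self.fn, self.inport, self.outport = fn, inport, outport

    def run(self):
        while True:
            item = self.inport.get() if self.inport else None
            out = _END if item is _END else self.fn(item)
            if out is _END:
                if self.outport:
                    self.outport.put(_END)   # propagate end-of-stream downstream
                break
            if self.outport:
                self.outport.put(out)

# Pipeline: generate numbers -> square them -> print them.
a_to_b = queue.Queue(maxsize=64)   # bounded streams; choosing these buffer sizes
b_to_c = queue.Queue(maxsize=64)   # is one of the optimisation problems mentioned above
numbers = iter(range(5))
source = Kernel(lambda _: next(numbers, _END), outport=a_to_b)
square = Kernel(lambda x: x * x, inport=a_to_b, outport=b_to_c)
sink   = Kernel(print, inport=b_to_c)

for k in (source, square, sink):
    k.start()
for k in (source, square, sink):
    k.join()
```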

    Recent Trends in Communication Networks

    In recent years there have been many developments in communication technology. These have greatly enhanced the computing power of small, handheld, resource-constrained mobile devices. Different generations of communication technology have evolved. This has led to new research on the communication of large volumes of data over different transmission media and the design of different communication protocols. Another direction of research concerns secure and error-free communication between the sender and receiver despite the risk of the presence of an eavesdropper. For the communication requirements of huge amounts of multimedia streaming data, a lot of research has been carried out on the design of proper overlay networks. The book addresses new research techniques that have evolved to handle these challenges.

    Applications of Power Electronics: Volume 1


    PV Charging and Storage for Electric Vehicles

    Electric vehicles are only ‘green’ as long as the source of electricity is ‘green’ as well. At the same time, renewable power production suffers from diurnal and seasonal variations, creating the need for energy storage technology. Moreover, overloading and voltage problems are expected in the distribution network due to the high penetration of distributed generation and the increased power demand from the charging of electric vehicles. The energy and mobility transition hence calls for novel technological innovations in the field of sustainable electric mobility powered by renewable energy. This Special Issue focuses on recent advances in technology for PV charging and storage for electric vehicles.

    Micro (Wind) Generation: 'Urban Resource Potential & Impact on Distribution Network Power Quality'

    Of the forms of renewable energy available, wind energy is at the forefront of the European (and Irish) green initiative, with wind farms supplying a significant proportion of electrical energy demand. This type of distributed generation (DG) represents a ‘paradigm shift’ towards increased decentralisation of energy supply. However, because most DG is distant from the urban areas where demand is greatest, there is a loss of efficiency. The solution, placing wind energy systems in urban areas, faces significant challenges. The complexities associated with the urban terrain include planning, surface heterogeneity that reduces the available wind resource, and technological obstacles to extracting and distributing wind energy. Yet, if a renewable solution to increasing energy demand is to be achieved, energy conversion systems must be considered where populations are concentrated, that is, in cities. This study is based on two independent strands of research: low voltage (LV) power flow and modelling of the urban wind resource. The urban wind resource is considered by employing a physically based empirical model to link wind observations at a conventional meteorological site to those acquired at urban sites. The approach is based on urban climate research that has examined the effects of varying surface roughness on the wind field above buildings. The development of the model is based on observational data acquired at two locations across Dublin, representing an urban and a suburban site. At each, detailed wind information is recorded at a height of about 1.5 times the average height of the surrounding buildings. These observations are linked to data gathered at a conventional meteorological station located at Dublin Airport, outside the city, through boundary-layer meteorological theory that accounts for surface roughness. The resulting model has sufficient accuracy to assess the wind resource at these sites and allows the potential for micro-turbine energy generation to be assessed. One of the obstacles to assessing this potential wind resource is our limited understanding of how turbulence within urban environments affects turbine productivity. This research uses two statistical approaches to examine the effect of turbulence intensity on wind turbine performance. The first is an adaptation of a model originally derived to quantify the degradation of the power performance of a wind turbine, using the Gaussian probability distribution to simulate turbulence. The second involves a novel application of the Weibull distribution, a widely accepted means of probabilistically describing wind speed and its variation. On the technological side, incorporating wind power into an urban distribution network requires power flow analysis to investigate the power quality issues, which are principally associated with imbalance of voltage on distribution lines and voltage rise. Distribution networks that incorporate LV consumers must accommodate a highly unbalanced load structure and the need for a grounding network between the consumer and the grid operator (TN-C-S earthing). In this regard, an asymmetrical 3-phase (plus neutral) power flow must be solved to represent the range of issues for the consumer and the network as wind-energy systems are integrated onto the distribution network. The focus in this research is on integrating micro/small generation, which can be installed in parallel with LV consumer connections.
    After initial investigations of a representative Irish distribution network, a section of an actual distribution network is modelled and a number of power flow algorithms are considered. Subsequently, an algorithm based on the admittance matrix of the network is identified as the optimal approach. The modelling refers to a 4-wire representation of a suburban distribution network within Dublin city, Ireland, which incorporates single-phase consumer connections (230 V-N). A range of network issues is investigated; more specifically, these include voltage unbalance/rise and the network neutral earth voltage (NEV) for increasing levels of micro/small wind generation technologies with respect to a modelled urban wind resource. The associated power flow analysis is further considered in terms of the turbulence modelling to ascertain how turbulence impinges on the network voltage/voltage-unbalance constraints.
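    As a pointer to how the first, Gaussian-based approach above can be applied, the sketch below estimates the expected output of a turbine at a given mean wind speed by averaging a static power curve over Gaussian wind-speed fluctuations whose standard deviation is set by the turbulence intensity. The power curve, turbulence intensities, and all numeric values are hypothetical placeholders, not data or parameters from the thesis.

```python
import numpy as np

def power_curve(u, cut_in=3.0, rated_u=11.0, cut_out=25.0, rated_p=2.5):
    """Static power curve in kW for a hypothetical micro turbine."""
    u = np.asarray(u, dtype=float)
    return np.where(u < cut_in, 0.0,
           np.where(u < rated_u, rated_p * ((u - cut_in) / (rated_u - cut_in)) ** 3,
           np.where(u < cut_out, rated_p, 0.0)))

def expected_power(mean_u, turbulence_intensity, n_samples=100_000, seed=1):
    """Expected power under Gaussian wind-speed fluctuations about mean_u."""
    rng = np.random.default_rng(seed)
    u = rng.normal(mean_u, turbulence_intensity * mean_u, n_samples)
    return power_curve(np.clip(u, 0.0, None)).mean()

# Compare turbulence-degraded output with the static curve at an 8 m/s mean speed.
static_p = float(power_curve(8.0))
for ti in (0.1, 0.2, 0.4):   # indicative suburban-to-urban turbulence intensities
    print(f"TI={ti:.1f}: {expected_power(8.0, ti):.2f} kW (static curve: {static_p:.2f} kW)")
```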

    Power Quality

    Electrical power is becoming one of the most dominant factors in our society. Power generation, transmission, distribution and usage are undergoing significant changes that will affect the electrical quality and performance needs of our 21st century industry. One major aspect of electrical power is its quality and stability, or so-called Power Quality. The view on Power Quality has changed over the past few years. Power Quality is becoming a more important term in the academic world dealing with electrical power, and it is becoming more visible in all areas of commerce and industry, because of the ever increasing industry automation using sensitive electrical equipment on the one hand and the dramatic change of our global electrical infrastructure on the other. For the past century, grid stability was maintained with a limited number of major generators that have a large amount of rotational inertia, so the rate of change of phase angle is slow. Unfortunately, this no longer works with renewable energy sources such as wind turbines or PV modules adding their share to the grid. Although the basic idea of using renewable energies is great and will be our path into the next century, it comes with a curse for the power grid, as power flow stability will suffer. It is not only the source side that is about to change; we have seen significant changes on the load side as well. Industry is using machines and electrical products such as AC drives or PLCs that are sensitive to the slightest change of power quality, and at home we use more and more electrical products with switching power supplies and are starting to plug in our electric cars to charge batteries. In addition, many of us have begun installing our own distributed generation systems on our rooftops using the latest solar panels. So we have looked for a way to address this severe impact on our distribution network. To match supply and demand, we are about to create a new, intelligent and self-healing electric power infrastructure: the Smart Grid. The basic idea is to maintain the necessary balance between generators and loads on a grid; in other words, to make sure we have a good grid balance at all times. But the key question that you should ask yourself is: does it also improve Power Quality? Probably not! Furthermore, the way Power Quality is measured is going to change. Traditionally, each country had its own Power Quality standards and defined its own power quality instrument requirements, but more and more international harmonization efforts can be seen, such as IEC 61000-4-30, an excellent standard that ensures that all compliant power quality instruments, regardless of manufacturer, produce comparable measurements; this also lowers the cost of measurement instruments so that they can be used in volume applications and even embedded directly into sensitive loads. But work still has to be done. We still use Power Quality standards that were written decades ago and no longer match today's technology, such as flicker standards that use parameters defined by the behavior of 60-watt incandescent light bulbs, which are becoming extinct. Almost all experts are in agreement: although we will see an improvement in metering and control of the power flow, Power Quality will suffer. This book will give an overview of how power quality might impact our lives today and tomorrow, introduce new ways to monitor power quality and inform us about interesting possibilities to mitigate power quality problems.
    Regardless of any enhancements of the power grid, “Power Quality is just compatibility”, as my good old friend and teacher Alex McEachern used to say. Power Quality will always remain an economic compromise between supply and load. The power available on the grid must be sufficiently clean for the loads to operate correctly, and the loads must be sufficiently strong to tolerate normal disturbances on the grid.