35 research outputs found
Multi particle swarm optimisation algorithm applied to supervisory power control systems
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London. Power quality problems come in numerous forms (commonly spikes, surges, sags, outages and harmonics), and their resolution can cost from a few hundred to millions of pounds, depending on the size and type of problem experienced by the power network. They are commonly experienced as burnt-out motors, corrupt data on hard drives, unnecessary downtime and increased maintenance costs. To minimise such events, the network can be monitored and controlled with a specific control regime to deal with particular faults. This study developed a control and optimisation system and applied it to the stability of electrical power networks using artificial intelligence techniques. An intelligent controller was designed to control and optimise simulated models for electrical power system stability. A fuzzy logic controller controlled the power generation, while particle swarm optimisation (PSO) techniques optimised the system's power quality in normal operating conditions and after faults. Different types of PSO were tested, and then a multi-swarm PSO (M-PSO) system was developed to give better optimisation results in terms of accuracy and convergence speed. The developed optimisation algorithm was tested on seven benchmarks and compared with single-swarm PSO variants.
The developed controller and optimisation algorithm were applied to power system stability control. Two electrical power network models were used (with two and four generators), controlled by fuzzy logic controllers tuned using the optimisation algorithm. The system selected the optimal controller parameters automatically for normal and fault conditions during the operation of the power network. A multi-objective cost function was used, based on minimising the recovery time, overshoot and steady-state error. A supervisory control layer was introduced to detect and diagnose faults and then apply the correct controller parameters. Different fault scenarios were used to test the system performance. The results indicate the great potential of the proposed power system stabiliser as a superior tool compared to conventional control systems.
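The multi-objective cost function described above combines recovery time, overshoot and steady-state error. A minimal sketch of such a weighted-sum cost is shown below; the weights and function name are illustrative assumptions, not values or identifiers from the thesis.

```python
def control_cost(recovery_time, overshoot, steady_state_error,
                 w1=1.0, w2=1.0, w3=1.0):
    # Weighted sum of the three performance criteria the abstract names.
    # The weights w1..w3 are illustrative placeholders; a tuner (e.g. PSO)
    # would evaluate candidate controller parameters against this cost.
    return w1 * recovery_time + w2 * overshoot + w3 * steady_state_error

# Compare two candidate controller settings by their simulated responses.
cost_a = control_cost(2.0, 0.3, 0.05)   # fast recovery, small overshoot
cost_b = control_cost(5.0, 0.8, 0.10)   # slower, more overshoot
```

An optimiser would prefer the candidate with the lower cost (here, the first one); in practice the weights are chosen to reflect how much each criterion matters for the network being controlled.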
Particle Swarm Optimization
Particle swarm optimization (PSO) is a population-based stochastic optimization technique influenced by the social behavior of bird flocking or fish schooling. PSO shares many similarities with evolutionary computation techniques such as genetic algorithms (GA). The system is initialized with a population of random solutions and searches for optima by updating generations. However, unlike GA, PSO has no evolution operators such as crossover and mutation. In PSO, the potential solutions, called particles, fly through the problem space by following the current optimum particles. This book represents the contributions of the top researchers in this field and will serve as a valuable tool for professionals in this interdisciplinary field.
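The particle update just described (each particle pulled toward its own best position and the swarm's best) can be sketched in a few lines of Python. This is a minimal global-best PSO; the hyperparameters are common textbook defaults, not values from any of the works listed here.

```python
import random

def pso(f, dim, n_particles=20, iters=100, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5):
    # Global-best PSO: each particle remembers its own best position (pbest)
    # and is also attracted toward the swarm-wide best (gbest).
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + cognitive pull (own best) + social pull (swarm best)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

random.seed(0)  # reproducible run
# Minimise the 2-D sphere function; the optimum is at the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=2)
```

There is no crossover or mutation, only the velocity update: this is the structural difference from GA that the passage above points out.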
A hybrid recursive least square pso based algorithm for harmonic estimation
The presence of harmonics shapes the performance of a power system, so harmonic estimation is of paramount importance when considering a power system network. Harmonic content is an important parameter for power system control and supports power system relaying, power quality monitoring, and the operation and control of electrical equipment. The increase in nonlinear loads and time-varying devices causes periodic distortion of voltage and current waveforms, which is not desirable in an electrical network. Due to these nonlinear loads and devices, the voltage and current waveforms contain sinusoidal components other than the fundamental frequency, known as harmonics. Existing harmonic estimation techniques include Least Squares (LS), Least Mean Squares (LMS), Recursive Least Squares (RLS), Kalman Filtering (KF), and soft computing techniques such as artificial neural networks (ANN), genetic algorithms (GA), particle swarm optimization (PSO), ant colony optimization, bacterial foraging optimization (BFO), the gravitational search algorithm, the cuckoo search algorithm, the water drop algorithm, the bat algorithm, and others. Although the LMS algorithm has low computational complexity and good tracking ability, it provides poor estimation performance due to its poor convergence rate, as the adaptation step size is fixed. In the case of RLS, a suitable initial choice of covariance matrix and gain leads to faster convergence. This thesis proposes a hybrid recursive least squares PSO-based algorithm for power system harmonic estimation. The proposed hybrid approach first optimizes the unknown parameters of the regressor of the input power system signal using particle swarm optimization, and then applies RLS to achieve faster convergence in estimating the harmonics of the distorted signal.
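The linear step of the hybrid method fits sine/cosine regressors to the distorted signal. As a sketch of that idea (not the thesis's algorithm), the snippet below estimates harmonic coefficients by correlation; when the record spans a whole number of fundamental periods, this coincides with the least-squares fit that RLS computes recursively. The PSO stage that tunes the regressor parameters is omitted here.

```python
import math

def harmonic_coeffs(samples, fs, f0, n_harmonics):
    # Correlate the signal with sin/cos at each harmonic of f0. Over whole
    # fundamental periods the regressors are orthogonal, so this equals the
    # batch least-squares solution (the linear problem RLS solves online).
    N = len(samples)
    coeffs = []
    for k in range(1, n_harmonics + 1):
        a = 2.0 / N * sum(y * math.sin(2 * math.pi * k * f0 * n / fs)
                          for n, y in enumerate(samples))
        b = 2.0 / N * sum(y * math.cos(2 * math.pi * k * f0 * n / fs)
                          for n, y in enumerate(samples))
        coeffs.append((a, b))
    return coeffs

# Synthetic distorted signal: a 50 Hz fundamental plus a 20% third harmonic,
# sampled at 1 kHz for one second (50 whole periods).
fs, f0 = 1000.0, 50.0
sig = [math.sin(2 * math.pi * f0 * n / fs)
       + 0.2 * math.sin(2 * math.pi * 3 * f0 * n / fs)
       for n in range(1000)]
est = harmonic_coeffs(sig, fs, f0, 3)
```

The recovered sine coefficients are close to the true amplitudes (1.0 for the fundamental, 0.2 for the third harmonic, near zero for the second), which is what a harmonic estimator is expected to report for this signal.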
Exact and non-exact procedures for solving the response time variability problem (RTVP)
Extraordinary doctoral award, 2009-2010 academic year, Industrial Engineering. When a resource must be shared between competing demands (of products, clients, jobs, etc.) that require regular attention, it is important to schedule the right of access to the resource in some fair manner, so that each product, client or job receives a share of the resource proportional to its demand relative to the total of the competing demands. These types of sequencing problems can be generalized under the following scheme. Given n symbols, each one with demand di (i = 1,...,n), a fair or regular sequence must be built in which each symbol appears di times. There is no universal definition of fairness, as several reasonable metrics to measure it can be defined according to the specific problem considered. In the Response Time Variability Problem (RTVP), the unfairness or irregularity of a sequence is measured by the sum, over all symbols, of the variability of the distances at which the copies of each symbol are sequenced. Thus, the objective of the RTVP is to find the sequence that minimises the total variability. In other words, the objective of the RTVP is to minimise the variability of the instants at which products, clients or jobs receive the necessary resource. This problem appears in a broad range of real-world areas. Applications include sequencing mixed-model assembly lines under just-in-time (JIT), resource allocation in multi-threaded computer systems such as operating systems, network servers and multimedia applications, periodic machine maintenance, waste collection, scheduling commercial videotapes for television, and designing salespeople's routes with multiple visits to the same client, among others. In some of these problems regularity is not a desirable property by itself, but it helps to minimise costs.
In fact, when the costs are proportional to the square of the distances, the problem of minimising costs and the RTVP are equivalent. The RTVP is very hard to solve (it has been shown to be NP-hard). The size of the RTVP instances that can be solved optimally with the best exact method in the literature has a practical limit of 40 units. On the other hand, the non-exact methods proposed in the literature to solve larger instances are simple heuristics that obtain solutions quickly, but the quality of the obtained solutions can be improved. Thus, the solution methods existing in the literature are not sufficient to solve the RTVP. The main objective of this thesis is to improve the resolution of the RTVP. This objective is split into the following two sub-objectives: 1) to increase the size of the RTVP instances that can be solved optimally in a practical computing time; and 2) to obtain efficiently near-optimal solutions for larger instances. Moreover, the thesis has the following two secondary objectives: a) to research the use of metaheuristics under the scheme of hyper-heuristics, and b) to design a systematic, hands-off procedure to set suitable values for the algorithm parameters. To achieve the aforementioned objectives, several procedures have been developed. To solve the RTVP, an exact procedure based on the branch and bound technique has been designed, and the size of the instances that can be solved in a practical time has been increased to 55 units. For larger instances, heuristic, metaheuristic and hyper-heuristic procedures have been designed, which can obtain optimal or near-optimal solutions quickly. Moreover, a systematic, hands-off fine-tuning method that takes advantage of two existing ones (the Nelder & Mead algorithm and CALIBRA) has been proposed. Award-winning. Postprint (published version).
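The RTVP objective described above is easy to evaluate for a given sequence: for each symbol with d copies in a sequence of length D, the distances between consecutive copies (taken circularly) are compared with the ideal spacing D/d. The sketch below follows this common formulation from the RTVP literature; it is an illustration of the metric, not the thesis's solution procedure.

```python
def rtv(sequence):
    # Response time variability: sum over symbols of the squared deviations
    # of circular inter-copy distances from the ideal spacing D/d.
    D = len(sequence)
    positions = {}
    for idx, s in enumerate(sequence):
        positions.setdefault(s, []).append(idx)
    total = 0.0
    for pos in positions.values():
        d = len(pos)
        if d < 2:
            continue  # a single copy has no distance variability
        ideal = D / d
        # distances between consecutive occurrences, wrapping around
        dists = [pos[i + 1] - pos[i] for i in range(d - 1)]
        dists.append(D - pos[-1] + pos[0])
        total += sum((t - ideal) ** 2 for t in dists)
    return total

regular = rtv(list("ABAB"))    # each symbol evenly spaced: variability 0
clumped = rtv(list("AABB"))    # copies bunched together: positive variability
```

An exact or heuristic RTVP solver searches over permutations with fixed symbol counts for the sequence minimising this value, which is why the problem becomes hard so quickly as instance size grows.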
Design Optimization of Composite Deployable Bridge Systems Using Hybrid Meta-heuristic Methods for Rapid Post-disaster Mobility
Recent decades have witnessed an increase in transportation infrastructure damage caused by natural disasters such as earthquakes, high winds and floods, as well as man-made disasters. Such damage disrupts the transportation infrastructure network and hence limits post-disaster relief operations. This has led to the exigency of developing and using effective deployable bridge systems for rapid post-disaster mobility while minimizing the weight-to-capacity ratio. Recent research assessing mobile bridging requirements concluded that current deployable metallic bridge systems are limited by their service life, are unable to meet the increase in vehicle design loads, and that any attempt to strengthen the structures would sacrifice ease of mobility. Therefore, this research focuses on developing a lightweight deployable bridge system using composite laminates for lightweight bridging in the aftermath of natural disasters. The research investigates the structural design optimization of composite laminate deployable bridge systems, as well as the design, development and testing of composite sandwich core sections that act as the compression bearing element in a deployable bridge treadway structure.
The thesis is organized into two parts. The first part includes a new improved particle swarm meta-heuristic approach capable of effectively optimizing deployable bridge systems. The developed approach is extended to modify the technique for discrete design of composite laminates and maximum strength design of composite sandwich core sections. The second part focuses on developing, experimentally testing and numerically investigating the performance of different sandwich core configurations that will be used as the compression bearing element in a deployable fibre-reinforced polymer (FRP) bridge girder.
The first part investigated different optimization algorithms used for structural optimization. The uncertainty in the effectiveness of the available methods in handling complex structural models emphasized the need to develop an enhanced version of the Particle Swarm Optimizer (PSO) without performing multiple operations using different techniques. The new technique implements a better emulation of the attraction and repulsion behavior of the swarm; the resulting algorithm is called the Controlled Diversity Particle Swarm Optimizer (CD-PSO). The algorithm improved the performance of the classical PSO in terms of solution stability, quality, convergence rate and computational time. The CD-PSO is then hybridized with the Response Surface Methodology (RSM) to redirect the swarm search toward probing feasible solutions in hyperspace using only the design parameters with a strong influence on the objective function; this is triggered when the algorithm fails to obtain good solutions using CD-PSO alone. The performance of CD-PSO is tested on benchmark structures and compared to others in the literature. Consequently, both techniques, CD-PSO and hybrid CD-PSO, are examined for the minimum-weight design of a large-scale deployable bridge structure. Furthermore, a discrete version of the algorithm is created to handle the discrete nature of the composite laminate sandwich core design.
The second part focuses on achieving an effective composite deployable bridge system. This is realized through maximizing the shear strength, compression strength and stiffness designs of lightweight composite sandwich cores for the treadway bridge's compression deck. Different composite sandwich cores are investigated and their progressive failure is numerically evaluated. The performance of the sandwich cores is experimentally tested in terms of flatwise compressive strength, edgewise compressive strength and shear strength capacities. Further, the cores' compression and shear strength capacities are numerically simulated and the results are validated against the experimental work. Based on the numerical and experimental findings, the sandwich core plate properties are quantified for future implementation in an optimized scaled deployable bridge treadway.
Comparing turnaround leadership in a rural church and in schools.
This qualitative study sought to illuminate successful practices of a turnaround leader in a rural church that are applicable cross-contextually, so as to inform the leadership efforts of various organizations seeking to reproduce organizational renewal on a wide-scale basis. Utilizing the principles of case study research, the researcher conducted participant observations, mined documents, and interviewed the pastor, three part-time staff members, and 24 members of a rural congregation in South-central Kentucky that had grown 289% in active membership over the last 14 years. Proceeding with the assumption that leaders can, by the practice of specific, intentional behaviors, positively impact the ability of a congregation to reverse its path and experience turnaround, and seeking to illuminate those behaviors, this study was guided by the following research questions: (a) In a rural church that has experienced revitalization (organizational turnaround), how do the pastor and congregants perceive the experience? (b) How do they perceive the characteristics and behaviors of the pastor as catalysts in this transformation? (c) What leadership principles of successful turnaround church efforts can be extracted from their experiences that are comparable to those reported in the literature on school revitalization efforts? The data from the study revealed that members did not recall specific events that led to turnaround so clearly as they recalled unity and harmony; this was contrasted with the period of turmoil and split immediately before the turnaround and the initial, devastating split the congregation endured 20 years prior. They did not describe events as much as they described their pastor, who helped bring peace and a culture conducive to revitalization.
With perhaps some credit to a youth program that was started under a previous pastor and reinstituted under the turnaround pastor's leadership, responses to the question of precipitants to growth essentially described their pastor's personality, (a) a people person and (b) a detail person, and five intentional behaviors: (a) developing a community presence, (b) providing quality, meaningful worship, (c) educating and equipping members, (d) providing a vision for the future, and (e) empowering and mobilizing the laity. This study revealed consistent themes that existed in the theoretical framework on schools provided by Kouzes and Posner (1987) as well as in the church and school turnaround lore. These findings propagate the notion that turnaround leaders often bear striking resemblances to one another, exhibiting many of the same personal character traits and intentional behaviors. These findings also suggest that turnaround leadership is not so much a product of individual, charismatic leadership as it is the product of consistent, sustained attention to sound leadership behaviors.
Structural performance evaluation and optimization through cyber-physical systems using substructure real-time hybrid simulation
Natural hazards continue to demonstrate the vulnerability of civil infrastructure worldwide. Engineers are dedicated to improving structural performance against natural hazards with improved design codes and computational tools. These improvements are often driven by experiments. Experimental testing not only enables the prediction of structural responses under dynamic loads but also provides a reliable way to investigate new solutions for hazard mitigation. Common experimental techniques in structural engineering include quasi-static testing, shake table testing, and hybrid simulation. In recent years, real-time hybrid simulation (RTHS) has emerged as a powerful alternative for driving improvements in civil infrastructure, as the entire structure's dynamic performance is captured with reduced experimental requirements. In addition, RTHS provides an attractive opportunity to investigate the optimal performance of complex structures or components against multi-hazards by embedding it in an optimization framework. RTHS stands to accelerate advancements in civil engineering, in particular for designing new structural systems or devices in a performance-based design environment.
This dissertation focuses on the use of cyber-physical systems (CPS) to evaluate structural performance and achieve optimal designs for seismic protection. It presents systematic studies on the development and validation of the dynamic substructuring RTHS technique using shake tables, novel techniques for increasing RTHS stability by introducing artificial damping into an under-actuated physical specimen, and the optimal design of structures or supplemental control devices for seismic protection through a cyber-physical substructure optimization (CPSO) framework using substructure RTHS.
INTELLIGENT OPTIMIZATION OF INTERLINE POWER FLOW CONTROLLER IN TRANSMISSION SYSTEM
Flexible AC Transmission System (FACTS) controllers are widely accepted worldwide for the benefits they provide in increasing power transfer capability and maximizing the use of existing transmission networks. A new generation of FACTS controllers, particularly the Interline Power Flow Controller (IPFC) based on voltage source converters (VSC), provides fast power flow control flexibility. The IPFC, with its unique power flow management capability, can be extended to control the power flows of multiple lines or a sub-network. Generally, an IPFC employs two or more VSCs connected together with DC links, and each converter provides series compensation for a selected line of the transmission system. Optimal power flow is an important factor in power system operation, planning and control. In this thesis, the mathematical model of the IPFC, together with the modified Newton-Raphson power flow method, is used to derive the optimal parameters (the magnitudes and voltage angles) of the VSCs of the IPFC. The optimal parameters are derived to minimize the transmission line losses using three intelligent optimization techniques, namely Particle Swarm Optimization (PSO), Genetic Algorithm (GA) and Simulated Annealing (SA). The proposed methods are implemented in MATLAB 7.6 and tested on the IEEE 14-bus and 30-bus benchmark power systems. The optimal IPFC parameters, the voltage profile and the transmission line losses of the benchmark power systems are derived from the simulations. The simulation results obtained with the PSO technique are compared with those obtained by the other two optimization techniques. The thesis also covers the basic principles and operation of the IPFC, the modified Newton-Raphson power flow method, and an overview of the three intelligent optimization techniques used. The results prove the efficacy of the three intelligent methods for the optimization of IPFC parameters and the minimization of transmission line losses.
Optimal distributed generation planning based on NSGA-II and MATPOWER
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. The UK and the world are moving away from central energy resources to distributed generation (DG) in order to lower carbon emissions. Renewable energy resources comprise a large percentage of DGs, and their optimal integration into the grid is the main aim of planning and development projects within the electricity network.
Feasibility and thorough conceptual design studies are required in the planning and development process, as most electricity networks were designed a few decades ago without considering the challenges imposed by DGs. As an example, the issue of voltage rise during steady-state conditions becomes problematic when a large amount of dispersed generation is connected to a distribution network. The efficient transfer of power out of or toward the network is not currently achievable due to the phase angle difference of each network supplied by DGs. Therefore, optimisation algorithms have been developed over the last decade to carry out the planning task optimally and alleviate the unwanted effects of DGs. The robustness of the algorithms proposed in the literature has been only partially addressed, owing to challenges of power system problems such as their multi-objective nature. In this work, the contribution provides a novel platform for the optimum integration of distributed generation in the power grid in terms of site and size. The work provides a modified non-dominated sorting genetic algorithm (NSGA-II) based on MATPOWER (for power flow calculation) in order to find a fast and reliable solution to optimum planning. The proposed multi-objective planning tool presents a fast-converging method for the case studies, incorporating the economic and technical aspects of DG planning from the planner's perspective. The proposed method is novel in terms of power flow constraint handling and can be applied to other energy planning problems.
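The core ranking step of NSGA-II, which underpins multi-objective planning tools like the one described above, is non-dominated sorting: candidate solutions are split into Pareto fronts of mutually non-dominated objective vectors. The sketch below shows a straightforward (non-optimized) version of that step for minimisation problems; crowding distance and the genetic operators of NSGA-II are omitted.

```python
def non_dominated_sort(points):
    # Split objective vectors (all to be minimised) into Pareto fronts:
    # front 0 holds solutions dominated by no one, front 1 holds solutions
    # dominated only by front 0, and so on.
    def dominates(p, q):
        return (all(a <= b for a, b in zip(p, q))
                and any(a < b for a, b in zip(p, q)))
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Hypothetical DG-planning objectives, e.g. (cost, losses), both minimised.
pts = [(1, 4), (2, 2), (4, 1), (3, 3)]
fronts = non_dominated_sort(pts)
```

Here the first three points trade off the two objectives against each other and form the first front, while (3, 3) is dominated by (2, 2) and falls into the second front; NSGA-II preferentially keeps earlier fronts when selecting the next generation.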