
    Capuchin Search Particle Swarm Optimization (CS-PSO) based Optimized Approach to Improve the QoS Provisioning in Cloud Computing Environment

    This review introduces methods for further enhancing resource allocation in cloud computing environments subject to QoS constraints. Resource distribution directly affects the quality of service (QoS) of cloud systems, and QoS constraints such as response time, throughput, waiting time, and makespan are key factors to take into account. The approach uses the Capuchin Search Particle Swarm Optimization (CS-PSO) algorithm to streamline resource allocation while respecting these QoS constraints, optimizing several objectives including throughput, response time, makespan, waiting time, and resource utilization. Resources are partitioned optimally using the K-medoids clustering scheme: during clustering, jobs are divided into two clusters, and the resource-allocation step is then optimized to obtain the best configuration. The experimental setup uses a JAVA implementation and the GWA-T-12 Bitbrains dataset for simulation. The extreme-value optimization problem of the multivariable objective function is addressed with the improved algorithm. The simulation findings show that the baseline Cloud Particle Swarm Optimization (CPSO) algorithm repeatedly fails to converge within 500 generations, and the comparative analysis reveals that the developed model outperforms state-of-the-art approaches. Overall, this approach provides a strong and effective procedure for improving resource allocation in cloud computing environments and can be applied to a variety of resource-allocation challenges, such as virtual machine placement, job scheduling, and resource provisioning. The capuchin search particle swarm optimization algorithm (CS-PSO) thereby delivers desirable optimization properties, including a simple mathematical formulation, rapid convergence, high efficiency, and good population diversity.
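
    As an illustration of the kind of swarm-based allocation this abstract describes, the sketch below shows a minimal particle swarm optimisation loop that assigns tasks to VMs so as to minimise makespan. It is an assumption-laden toy, not the paper's CS-PSO: the task lengths, VM speeds, position-decoding scheme and PSO coefficients are all hypothetical.

        # Minimal, illustrative PSO for task-to-VM allocation (not the paper's CS-PSO).
        import numpy as np

        rng = np.random.default_rng(0)
        task_lengths = rng.uniform(100, 1000, size=20)  # instructions per task (hypothetical)
        vm_speeds = np.array([250.0, 500.0, 750.0])     # instructions/sec per VM (hypothetical)

        def makespan(position):
            """Decode a continuous position into a task->VM assignment; return max finish time."""
            assignment = np.clip(position.astype(int), 0, len(vm_speeds) - 1)
            loads = np.zeros(len(vm_speeds))
            for task, vm in enumerate(assignment):
                loads[vm] += task_lengths[task] / vm_speeds[vm]
            return loads.max()

        n_particles, n_iters, dim = 30, 200, len(task_lengths)
        pos = rng.uniform(0, len(vm_speeds), (n_particles, dim))
        vel = np.zeros_like(pos)
        pbest = pos.copy()
        pbest_val = np.array([makespan(p) for p in pos])
        gbest = pbest[pbest_val.argmin()].copy()

        w, c1, c2 = 0.7, 1.5, 1.5  # standard inertia and acceleration coefficients
        for _ in range(n_iters):
            r1, r2 = rng.random((2, n_particles, dim))
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, 0, len(vm_speeds) - 1e-9)
            vals = np.array([makespan(p) for p in pos])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            gbest = pbest[pbest_val.argmin()].copy()

        print("best makespan found:", pbest_val.min())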

    The effect of load on agent-based algorithms for distributed task allocation

    Multi-agent algorithms inspired by the division of labour in social insects and by markets are applied to a constrained problem of distributed task allocation. The efficiency (average number of tasks performed), the flexibility (ability to react to changes in the environment), and the sensitivity to load (ability to cope with differing demands) are investigated in both static and dynamic environments. A hybrid algorithm combining both approaches is shown to exhibit improved efficiency and robustness. We employ nature-inspired particle swarm optimisation to obtain optimised parameters for all algorithms in a range of representative environments. Although results are obtained for large population sizes to avoid finite-size effects, the influence of population size on the performance is also analysed. From a theoretical point of view, we analyse the causes of efficiency loss, derive theoretical upper bounds for the efficiency, and compare these with the experimental results.
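
    For readers unfamiliar with the insect-inspired side of such algorithms, the sketch below implements the classic response-threshold rule that underlies many division-of-labour models: agents with a low threshold for a task engage at lower stimulus levels. The stimulus dynamics and parameter values are illustrative assumptions, not the paper's algorithm.

        # Response-threshold division-of-labour sketch (illustrative assumptions only).
        import random

        def take_task_probability(stimulus, threshold):
            """Classic response-threshold rule: P = s^2 / (s^2 + theta^2)."""
            return stimulus**2 / (stimulus**2 + threshold**2)

        def step(agents, stimulus, delta=0.1):
            """One time step: each agent decides stochastically whether to work;
            unmet demand raises the stimulus, work performed lowers it (hypothetical update)."""
            performed = sum(
                1 for threshold in agents
                if random.random() < take_task_probability(stimulus, threshold)
            )
            return max(0.0, stimulus + delta - 0.05 * performed)

        agents = [random.uniform(0.1, 1.0) for _ in range(100)]  # one threshold per agent
        stimulus = 0.5
        for _ in range(50):
            stimulus = step(agents, stimulus)
        print("equilibrium stimulus:", round(stimulus, 3))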

    A Novel Workload Allocation Strategy for Batch Jobs

    The distribution of computational tasks across a diverse set of geographically distributed heterogeneous resources is a critical issue in the realisation of true computational grids. Conventionally, workload allocation algorithms are divided into static and dynamic approaches. Whilst dynamic approaches frequently outperform static schemes, they usually require the collection and processing of detailed system information at frequent intervals, a task that can be both time-consuming and unreliable in the real world. This paper introduces a novel workload allocation algorithm for optimally distributing the workload produced by the arrival of batches of jobs. Results show that, for the arrival of batches of jobs, this workload allocation algorithm outperforms other commonly used algorithms in the static case. A hybrid scheduling approach (using this workload allocation algorithm), in which information about the speed of computational resources is inferred from previously completed jobs, is then introduced, and its efficiency is demonstrated on a real-world computational grid. These results are compared with the same workload allocation algorithm used in the static case, showing that the hybrid approach comprehensively outperforms the static approach.
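
    A minimal sketch of the static side of such a scheme appears below: a batch of jobs is split across heterogeneous resources in proportion to their (possibly inferred) speeds, with largest-remainder rounding so the counts sum correctly. The speed values and the rounding rule are assumptions for illustration, not the paper's algorithm.

        # Proportional batch allocation sketch (speeds and rounding rule are assumptions).
        def allocate_batch(n_jobs, speeds):
            """Split n_jobs across resources proportionally to speed, using
            largest-remainder rounding so the counts sum to n_jobs."""
            total = sum(speeds)
            shares = [n_jobs * s / total for s in speeds]
            counts = [int(x) for x in shares]
            remainder = n_jobs - sum(counts)
            # hand leftover jobs to the resources with the largest fractional share
            order = sorted(range(len(speeds)),
                           key=lambda i: shares[i] - counts[i], reverse=True)
            for i in order[:remainder]:
                counts[i] += 1
            return counts

        # speeds inferred from previously completed jobs (hypothetical values);
        # counts sum to n_jobs, weighted toward faster resources
        print(allocate_batch(100, [1.0, 2.5, 4.0]))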

    Optimisation of Mobile Communication Networks - OMCO NET

    The mini-conference “Optimisation of Mobile Communication Networks” focuses on advanced methods for search and optimisation applied to wireless communication networks. It is sponsored by the Research & Enterprise Fund of Southampton Solent University. The conference strives to widen knowledge of advanced search methods capable of optimising wireless communication networks, and aims to provide a forum for the exchange of recent knowledge, new ideas and trends in this progressive and challenging area. The conference will popularise successful new approaches to resolving hard tasks such as minimisation of transmit power and cooperative and optimal routing.

    A Hybrid Optimization Algorithm for Efficient Virtual Machine Migration and Task Scheduling Using a Cloud-Based Adaptive Multi-Agent Deep Deterministic Policy Gradient Technique

    To achieve optimal system performance in the rapidly developing field of cloud computing, efficient resource management, which includes accurate job scheduling and optimized Virtual Machine (VM) migration, is essential. This study proposes a cutting-edge hybrid optimization algorithm for effective virtual machine migration and task scheduling based on the Adaptive Multi-Agent System with Deep Deterministic Policy Gradient (AMS-DDPG) algorithm. The foundation of this technique is the Iterative Concept of War and Rat Swarm (ICWRS) algorithm, a sophisticated combination of the War Strategy Optimization (WSO) and Rat Swarm Optimizer (RSO) algorithms. Notably, ICWRS optimizes the system with 93% accuracy, especially for load balancing, job scheduling, and virtual machine migration. The flexibility and efficiency of VM migration and task scheduling are greatly improved by the AMS-DDPG technique, which uses a powerful combination of deterministic policy gradients and deep reinforcement learning. By ensuring the best possible resource allocation, the adaptive multi-agent approach further enhances decision-making. Performance in cloud-based virtualized systems is significantly enhanced by our hybrid method, which combines deep learning and multi-agent coordination. Extensive tests, including a detailed comparison with conventional techniques, verify the effectiveness of the suggested strategy and confirm that the hybrid optimization approach is successful. The findings show significant improvements in system efficiency, shorter job completion times, and optimal resource utilization. The integration of ICWRS within the AMS-DDPG framework reveals unrealized potential for synergistic optimization in cloud-based systems. This strategic resource allocation, attained via careful computational utilization, enables a high-performing and sustainable cloud computing infrastructure that can adapt to the changing needs of modern computing paradigms.
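
    To make the DDPG component concrete, the sketch below shows a heavily simplified actor-critic update of the kind DDPG performs, framed as a resource-management agent mapping a load-state vector to a migration action. It omits target networks, exploration noise and the replay buffer; the network sizes, state and action dimensions, and the synthetic batch are assumptions, and this is not the paper's AMS-DDPG system.

        # Simplified DDPG core (illustrative; not the paper's AMS-DDPG).
        import torch
        import torch.nn as nn

        state_dim, action_dim = 8, 2  # hypothetical: VM load features -> migration decision

        actor = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                              nn.Linear(64, action_dim), nn.Tanh())
        critic = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
                               nn.Linear(64, 1))
        actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
        critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
        gamma = 0.99

        def ddpg_update(s, a, r, s2):
            """One DDPG step on a batch: TD-learn the critic, then ascend the
            critic's value of the actor's own actions (deterministic policy gradient)."""
            with torch.no_grad():  # bootstrap target uses the actor's next action
                target = r + gamma * critic(torch.cat([s2, actor(s2)], dim=1))
            critic_loss = nn.functional.mse_loss(
                critic(torch.cat([s, a], dim=1)), target)
            critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

            actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
            actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

        # one synthetic batch, standing in for replay-buffer samples
        s, s2 = torch.randn(32, state_dim), torch.randn(32, state_dim)
        a, r = torch.randn(32, action_dim).tanh(), torch.randn(32, 1)
        ddpg_update(s, a, r, s2)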

    A comprehensive survey on cultural algorithms


    Exact and non-exact procedures for solving the response time variability problem (RTVP)

    When a resource must be shared between competing demands (of products, clients, jobs, etc.) that require regular attention, it is important to schedule the right of access to the resource in some fair manner, so that each product, client or job receives a share of the resource proportional to its demand relative to the total of the competing demands. These types of sequencing problems can be generalised under the following scheme. Given n symbols, each with demand di (i = 1,...,n), a fair or regular sequence must be built in which each symbol appears di times. There is no universal definition of fairness, as several reasonable metrics can be defined according to the specific problem considered. In the Response Time Variability Problem (RTVP), the unfairness or irregularity of a sequence is measured by the sum, over all symbols, of the variabilities in the distances at which the copies of each symbol are sequenced. Thus, the objective of the RTVP is to find the sequence that minimises the total variability; in other words, to minimise the variability in the instants at which products, clients or jobs receive the necessary resource. This problem appears in a broad range of real-world areas, including sequencing of mixed-model assembly lines under just-in-time (JIT), resource allocation in multi-threaded computer systems such as operating systems, network servers and media-based applications, periodic machine maintenance, waste collection, scheduling of commercial videotapes for television, and the design of salespeople's routes with multiple visits to the same client. In some of these problems regularity is not a desirable property by itself, but it helps to minimise costs; in fact, when costs are proportional to the square of the distances, the cost-minimisation problem and the RTVP are equivalent. The RTVP is very hard to solve (it has been shown to be NP-hard). The size of the RTVP instances that can be solved optimally with the best exact method in the literature has a practical limit of 40 units. On the other hand, the non-exact methods proposed in the literature for larger instances are simple heuristics that obtain solutions quickly but whose quality can be improved. Thus, the solution methods in the literature are insufficient. The main objective of this thesis is to improve the resolution of the RTVP. This objective is split into the following two sub-objectives: 1) to increase the size of the RTVP instances that can be solved optimally in a practical computing time; and 2) to obtain near-optimal solutions efficiently for larger instances. Moreover, the thesis has two secondary objectives: a) to research the use of metaheuristics under the scheme of hyper-heuristics, and b) to design a systematic, hands-off procedure to set suitable values for algorithm parameters.
    To achieve these objectives, several procedures have been developed. To solve the RTVP, an exact procedure based on the branch-and-bound technique has been designed, and the size of the instances that can be solved in a practical time has been increased to 55 units. For larger instances, heuristic, metaheuristic and hyper-heuristic procedures have been designed, which can obtain optimal or near-optimal solutions quickly. Moreover, a systematic, hands-off parameter-tuning procedure that takes advantage of two existing methods (the Nelder & Mead algorithm and CALIBRA) has been proposed.
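
    As a concrete statement of the objective described above, the RTVP is commonly formalised as follows (a standard formulation consistent with the abstract; the notation here is ours, not the thesis's):

        % Sequence of length D = sum_i d_i, in which symbol i appears d_i times.
        % t_{ik} is the distance between consecutive occurrences k and k+1 of
        % symbol i (positions taken circularly), and the ideal distance is
        % \bar{t}_i = D / d_i. The response time variability to minimise is:
        \[
          \mathrm{RTV} \;=\; \sum_{i=1}^{n} \sum_{k=1}^{d_i}
              \left( t_{ik} - \bar{t}_i \right)^{2},
          \qquad \bar{t}_i = \frac{D}{d_i}, \quad D = \sum_{i=1}^{n} d_i .
        \]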

    A Review of Wireless Sensor Networks with Cognitive Radio Techniques and Applications

    The advent of Wireless Sensor Networks (WSNs) has inspired applications across various sciences and telecommunications, and there is a growing demand for robust methodologies that can ensure an extended network lifetime. Sensor nodes are small devices that hold limited electrical energy, which must be conserved while data travels to its destination in the network. A central concern is therefore carrying out the sensor routing process alongside transferring information: choosing the best transmission route from a sensor node is necessary both to reach the destination and to conserve energy. Clustering is considered an effective method for gathering data and routing it through the nodes of a wireless sensor network, and the primary requirement is to extend network lifetime by minimizing energy consumption. Further, integrating cognitive radio techniques into sensor networks, so that nodes can make smart choices based on knowledge acquisition, reasoning, and information sharing, may support the network's overall purposes amid numerous limitations and competing optimization targets. This review focuses on routing and clustering using metaheuristic techniques and machine learning, because these operations strongly affect the lifetime of cognitive radio wireless sensor nodes.
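
    As a pointer to what clustering typically means operationally in a WSN, the sketch below implements a LEACH-style probabilistic cluster-head election, one of the classic clustering schemes such surveys cover. The election probability, energy model and round structure are illustrative assumptions, not an algorithm taken from this review.

        # LEACH-style cluster-head election sketch (parameters are assumptions).
        import random

        def elect_cluster_heads(nodes, p=0.1, round_no=0):
            """Each alive node becomes a cluster head with the LEACH-style
            threshold T = p / (1 - p * (r mod 1/p)); simplified here in that
            recent-head bookkeeping is reduced to an energy cost."""
            threshold = p / (1 - p * (round_no % int(1 / p)))
            heads = []
            for node in nodes:
                if node["energy"] > 0 and random.random() < threshold:
                    heads.append(node["id"])
                    node["energy"] -= 0.05  # serving as head costs extra energy (hypothetical)
            return heads

        nodes = [{"id": i, "energy": 1.0} for i in range(50)]
        for r in range(5):
            print(f"round {r}: heads={elect_cluster_heads(nodes, p=0.1, round_no=r)}")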