
    Enhancing the genetic-based scheduling in computational grids by a structured hierarchical population

    Independent Job Scheduling is one of the most useful versions of scheduling in grid systems. It aims to compute efficient, optimal mappings of jobs and/or applications submitted by independent users onto grid resources. Besides traditional restrictions, the mapping of jobs to resources must be computed under a high degree of resource heterogeneity, the large scale of the system, and its dynamics. Because of the complexity of the problem, heuristic and meta-heuristic approaches are the most feasible scheduling methods for grids, owing to their ability to deliver high-quality solutions in reasonable computing time. One class of such meta-heuristics is the Hierarchic Genetic Strategy (HGS), a variant of Genetic Algorithms (GAs) that differs from other genetic methods in its capability of concurrent search of the solution space. In this work, we present an implementation of HGS for Independent Job Scheduling in dynamic grid environments. We consider the bi-objective version of the problem, in which makespan and flowtime are optimized simultaneously. Building on our previous work, we improve the HGS scheduling strategy by enhancing its main branching operations. The resulting HGS-based scheduler is evaluated under heterogeneity, large-scale, and dynamic conditions using a grid simulator. The experimental study shows that the HGS implementation outperforms existing GA-based schedulers proposed in the literature. (Peer reviewed; postprint, author's final draft.)
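    For readers unfamiliar with the two objectives, the sketch below (ours, not the paper's code) shows how makespan and flowtime are typically computed for a candidate job-to-resource mapping under the standard expected-time-to-compute (ETC) model; the toy instance is invented for illustration.

        # Minimal sketch: evaluating makespan and flowtime for a mapping of
        # independent jobs to grid resources under the usual ETC model.

        def makespan_and_flowtime(schedule, etc):
            """schedule[j] = machine assigned to job j; etc[j][m] = expected time
            to compute job j on machine m. Jobs on a machine run in list order."""
            num_machines = len(etc[0])
            completion = [0.0] * num_machines      # running completion time per machine
            flowtime = 0.0
            for job, machine in enumerate(schedule):
                completion[machine] += etc[job][machine]
                flowtime += completion[machine]    # finishing time of this job
            return max(completion), flowtime       # (makespan, flowtime)

        # Toy instance: 4 jobs, 2 heterogeneous machines (values are illustrative).
        etc = [[3.0, 6.0], [2.0, 1.0], [4.0, 8.0], [5.0, 2.5]]
        print(makespan_and_flowtime([0, 1, 0, 1], etc))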

    Use of genetic algorithms for scheduling jobs in large scale grid applications

    In this paper we present an implementation of Genetic Algorithms (GA) for job scheduling on computational grids that optimizes the makespan and the total flowtime. Job scheduling on computational grids is a key problem in large-scale grid-based applications for solving complex problems. The aim is to obtain an efficient scheduler able to allocate a large number of jobs originating from large-scale applications to grid resources. Several variations of the GA operators are examined in order to identify which works best for the problem. To this end we have developed a grid simulator package to generate large and very large instances of the problem and have used them to study the performance of the GA implementation. Through extensive experimentation and fine-tuning of parameters we have identified the configuration of operators and parameters that outperforms the existing implementations in the literature for static instances of the problem. The experimental results show the robustness of the implementation, improved performance on static instances compared to results reported in the literature and, finally, a fast reduction of the makespan, thus making the scheduler of practical interest for grid environments. Summary (translated from Lithuanian): Use of genetic algorithms in computer networks for job scheduling. The paper describes how a genetic algorithm is applied to optimize job completion times in job scheduling using the resources of networked computers. Job scheduling over a computer network is a pressing problem when solving complex, large-scale problems. The authors' aim is to create an algorithm that distributes the stream of submitted jobs across the computer network most efficiently. Several algorithms were examined and the best one selected. A software package simulating the operation of a computer network was developed and tested on concrete problem instances. Through experimentation the best combination of operators and parameters was found, and the results showed that the scheduling time was reduced. First Published Online: 21 Oct 2010. Keywords: genetic algorithm, job scheduling, computer network, examples, job duration, time.
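    As a rough illustration of the kind of GA operators examined in such studies, the following sketch applies one-point crossover and a reassignment mutation to the direct encoding in which position j holds the resource assigned to job j. The operator names, mutation rate, and toy chromosomes are our own assumptions, not necessarily the configuration the paper found best.

        # Illustrative GA operators over the direct job-to-resource encoding.
        import random

        def one_point_crossover(parent_a, parent_b):
            """Swap the tails of two schedules at a random cut point."""
            cut = random.randint(1, len(parent_a) - 1)
            return parent_a[:cut] + parent_b[cut:], parent_b[:cut] + parent_a[cut:]

        def move_mutation(schedule, num_resources, rate=0.1):
            """Reassign each job to a random resource with a small probability."""
            return [random.randrange(num_resources) if random.random() < rate else m
                    for m in schedule]

        random.seed(1)
        a, b = [0, 0, 1, 2, 1], [2, 1, 0, 0, 2]
        child1, child2 = one_point_crossover(a, b)
        print(child1, move_mutation(child1, num_resources=3))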

    Genetic and Swarm Algorithms for Optimizing the Control of Building HVAC Systems Using Real Data: A Comparative Study.

    Buildings consume a considerable amount of electrical energy, with the Heating, Ventilation, and Air Conditioning (HVAC) system being the most demanding load. Saving energy while maintaining comfort remains a challenge because the two objectives conflict. Control of HVAC systems can be improved by modeling their behavior, which is nonlinear, complex, dynamic, and operates in uncertain contexts. The scientific literature shows that Soft Computing techniques require fewer computing resources, but at the expense of some loss of control accuracy. Metaheuristic search-based algorithms show positive results, although further research is necessary to address new and challenging multi-objective optimization problems. This article compares the performance of selected genetic and swarm-intelligence-based algorithms with the aim of discerning their capabilities in the field of smart buildings. MOGA, NSGA-II/III, OMOPSO, and SMPSO, with Random Search as a benchmark, are compared on hypervolume, generational distance, ε-indicator, and execution time. Real data from the Building Management System of the Teatro Real de Madrid were used to train the data model used for the multi-objective calculations. Beyond the analysis of the proposed dynamic optimization algorithms during the transient time of an HVAC system, the novelty of this work lies in adding, to the conventional optimization objectives of comfort and energy efficiency, the coefficient of performance and the rate of change in ambient temperature, aiming to extend the equipment lifecycle and minimize the overshooting effect when passing to the steady state. The optimization performs impressively well in energy savings, although the results must be balanced against other real-world considerations, such as realistic constraints on the chillers' operational capacity. The intuitive visualization of the performance of the two families of algorithms on a real multi-HVAC system adds to the novelty of this proposal.
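    To make one of the reported quality indicators concrete, the sketch below computes the two-objective hypervolume (minimisation form) of a non-dominated front against a reference point; the front and reference values are invented for illustration and are not taken from the study.

        # Two-objective hypervolume by a simple sweep over a sorted front.

        def hypervolume_2d(front, ref):
            """Area dominated by `front` (list of (f1, f2) to minimise), bounded by `ref`."""
            pts = sorted(p for p in front if p[0] <= ref[0] and p[1] <= ref[1])
            hv, prev_f2 = 0.0, ref[1]
            for f1, f2 in pts:                       # sweep in increasing f1
                if f2 < prev_f2:                     # skip dominated points
                    hv += (ref[0] - f1) * (prev_f2 - f2)
                    prev_f2 = f2
            return hv

        front = [(0.2, 0.9), (0.4, 0.5), (0.7, 0.3)]   # hypothetical Pareto approximation
        print(hypervolume_2d(front, ref=(1.0, 1.0)))   # -> 0.38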

    Active Processor Scheduling Using Evolution Algorithms

    The allocation of processes to processors has long been of interest to engineers. The processor allocation problem considered here assigns multiple applications onto a computing system. With this algorithm, researchers could more efficiently examine real-time sensor data such as that used by United States Air Force digital signal processing efforts, or real-time aerosol hazard detection as examined by the Department of Homeland Security. Different choices for the design of a load balancing algorithm are examined in both the problem and algorithm domains. Evolutionary algorithms are used to find near-optimal solutions; they incorporate multiobjective, coevolutionary, and parallel principles to create an effective and efficient algorithm for real-world allocation problems. Three evolutionary algorithms (EAs) are developed. The primary algorithm generates a solution to the processor allocation problem. This allocation EA is capable of evaluating objectives in both an aggregate single-objective and a Pareto multiobjective manner. The other two EAs are designed for fine-tuning the solutions returned by the allocation EA. One coevolutionary algorithm is used to optimize the parameters of the allocation algorithm. This meta-EA is parallelized using a coarse-grain approach to improve performance, and experiments are conducted that validate the improved effectiveness of the parallelized algorithm; a Pareto multiobjective approach is used to optimize both effectiveness and efficiency objectives. The other coevolutionary algorithm generates difficult allocation problems for testing the capabilities of the allocation EA. The effectiveness of both coevolutionary algorithms for optimizing the allocation EA is examined quantitatively using standard statistical methods, and the allocation EA's objective tradeoffs are analyzed and compared.
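    The following minimal sketch (our illustration, not the thesis code) contrasts the two evaluation modes mentioned above: collapsing an objective vector into a weighted aggregate versus comparing allocations by Pareto dominance. The objective names and numbers are hypothetical.

        # Aggregate vs. Pareto evaluation of allocation objective vectors (all minimised).

        def aggregate(objectives, weights):
            """Collapse an objective vector into one scalar fitness."""
            return sum(w * f for w, f in zip(weights, objectives))

        def dominates(a, b):
            """True if allocation `a` Pareto-dominates `b` (no worse in all, better in one)."""
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        # Hypothetical (load_imbalance, communication_cost) vectors for two allocations.
        alloc_a, alloc_b = (0.30, 120.0), (0.45, 150.0)
        print(aggregate(alloc_a, weights=(1.0, 0.01)), dominates(alloc_a, alloc_b))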

    A simheuristic approach for evolving agent behaviour in the exploration for novel combat tactics

    The automatic generation of behavioural models for intelligent agents in military simulation and experimentation remains a challenge. Genetic Algorithms are a global optimisation approach suited to complex problems where locating the global optimum is difficult. Unlike traditional optimisation techniques such as hill-climbing or derivative-based methods, Genetic Algorithms are robust on highly multi-modal and discontinuous search landscapes. In this paper, we outline a simheuristic GA-based approach for the automatic generation of finite-state-machine-based behavioural models of intelligent agents, where the aim is the identification of novel combat tactics. Rather than evolving states, the proposed approach evolves a sequence of transitions, as sketched below. We also discuss workable starting points for the use of Genetic Algorithms in such scenarios, shedding some light on the associated design and implementation difficulties.
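    A loose sketch of the encoding idea, under our own assumptions about states, stimuli, and actions (none of these names come from the paper), is given below: each individual is a sequence of transitions that is replayed as a finite state machine over a stimulus stream.

        # Genome = sequence of (state, stimulus -> next state, action) transitions.
        import random

        STATES = ["patrol", "engage", "retreat"]          # assumed, for illustration only
        STIMULI = ["enemy_seen", "under_fire", "clear"]
        ACTIONS = ["advance", "fire", "withdraw", "hold"]

        def random_transition():
            return (random.choice(STATES), random.choice(STIMULI),
                    random.choice(STATES), random.choice(ACTIONS))

        def run_fsm(genome, stimuli, start="patrol"):
            """Replay a stimulus sequence through the evolved transition table."""
            table = {(s, i): (n, a) for s, i, n, a in genome}   # later genes overwrite earlier
            state, trace = start, []
            for stim in stimuli:
                state, action = table.get((state, stim), (state, "hold"))
                trace.append(action)
            return trace

        random.seed(0)
        genome = [random_transition() for _ in range(8)]        # one candidate individual
        print(run_fsm(genome, ["enemy_seen", "under_fire", "clear"]))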

    Java message passing interface.

    by Wan Lai Man. Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. Includes bibliographical references (leaves 76-80). Abstract also in Chinese.
    Table of contents:
    Chapter 1 --- Introduction --- p.1
      1.1 --- Background --- p.1
      1.2 --- Objectives --- p.3
      1.3 --- Contributions --- p.4
      1.4 --- Overview --- p.4
    Chapter 2 --- Literature Review --- p.6
      2.1 --- Message Passing Interface --- p.6
        2.1.1 --- Point-to-Point Communication --- p.7
        2.1.2 --- Persistent Communication Request --- p.8
        2.1.3 --- Collective Communication --- p.8
        2.1.4 --- Derived Datatype --- p.9
      2.2 --- Communications in Java --- p.10
        2.2.1 --- Object Serialization --- p.10
        2.2.2 --- Remote Method Invocation --- p.11
      2.3 --- Performances Issues in Java --- p.11
        2.3.1 --- Byte-code Interpreter --- p.11
        2.3.2 --- Just-in-time Compiler --- p.12
        2.3.3 --- HotSpot --- p.13
      2.4 --- Parallel Computing in Java --- p.14
        2.4.1 --- JavaMPI --- p.15
        2.4.2 --- Bayanihan --- p.15
        2.4.3 --- JPVM --- p.15
    Chapter 3 --- Infrastructure --- p.17
      3.1 --- Layered Model --- p.17
      3.2 --- Java Parallel Environment --- p.19
        3.2.1 --- Job Coordinator --- p.20
        3.2.2 --- HostApplet --- p.20
        3.2.3 --- Formation of Java Parallel Environment --- p.21
        3.2.4 --- Spawning Processes --- p.24
        3.2.5 --- Message-passing Mechanism --- p.28
      3.3 --- Application Programming Interface --- p.28
        3.3.1 --- Message Routing --- p.29
        3.3.2 --- Language Binding for MPI in Java --- p.31
    Chapter 4 --- Programming in JMPI --- p.35
      4.1 --- JMPI Package --- p.35
      4.2 --- Application Startup Procedure --- p.37
        4.2.1 --- MPI --- p.38
        4.2.2 --- JMPI --- p.38
      4.3 --- Example --- p.39
    Chapter 5 --- Processes Management --- p.42
      5.1 --- Background --- p.42
      5.2 --- Scheduler Model --- p.43
      5.3 --- Load Estimation --- p.45
        5.3.1 --- Cost Ratios --- p.47
      5.4 --- Task Distribution --- p.49
    Chapter 6 --- Performance Evaluation --- p.51
      6.1 --- Testing Environment --- p.51
      6.2 --- Latency from Java --- p.52
        6.2.1 --- Benchmarking --- p.52
        6.2.2 --- Experimental Results in Computation Costs --- p.52
        6.2.3 --- Experimental Results in Communication Costs --- p.55
      6.3 --- Latency from JMPI --- p.56
        6.3.1 --- Benchmarking --- p.56
        6.3.2 --- Experimental Results --- p.58
      6.4 --- Application Granularity --- p.62
      6.5 --- Scheduling Enable --- p.64
    Chapter 7 --- Conclusion --- p.66
      7.1 --- Summary of the thesis --- p.66
      7.2 --- Future work --- p.67
    Chapter A --- Performance Metrics and Benchmark --- p.69
      A.1 --- Model and Metrics --- p.69
        A.1.1 --- Measurement Model --- p.69
        A.1.2 --- Performance Metrics --- p.70
        A.1.3 --- Communication Parameters --- p.72
      A.2 --- Benchmarking --- p.73
        A.2.1 --- Ping --- p.73
        A.2.2 --- PingPong --- p.74
        A.2.3 --- Collective --- p.74
    Bibliography --- p.7

    Enhanced non-parametric sequence learning scheme for internet of things sensory data in cloud infrastructure

    The Internet of Things (IoT) Cloud is an emerging technology that enables machine-to-machine, human-to-machine and human-to-human interaction through the Internet. IoT sensor devices tend to generate sensory data known for their dynamic and heterogeneous nature. This makes such data difficult to manage on the sensor devices themselves because of their limited computation power and storage space. The Cloud Infrastructure as a Service (IaaS), however, compensates for the limitations of IoT devices by making its computation power and storage resources available to process IoT sensory data. In IoT-Cloud IaaS, resource allocation is the process of distributing optimal resources to execute data request tasks that comprise data filtering operations. Recently, machine learning, non-heuristic, multi-objective and hybrid algorithms have been applied for efficient resource allocation to execute IoT sensory data filtering request tasks in IoT-enabled Cloud IaaS. However, the filtering task is still prone to several challenges: global search entrapment in event and error outlier detection as the dimension of the dataset grows, the inability to recover missing data for effective redundant data elimination, and local search entrapment that leads to unbalanced workloads on the resources required for task execution. In this thesis, enhancements of the Non-Parametric Sequence Learning (NPSL), Perceptually Important Point (PIP) and Efficient Energy Resource Ranking-Virtual Machine Selection (ERVS) algorithms are proposed. The Non-Parametric Sequence-based Agglomerative Gaussian Mixture Model (NPSAGMM) technique is first used to improve the detection of event and error outliers in the global space as the dimension of the dataset increases. Then, the Perceptually Important Points K-means-enabled Cosine and Manhattan (PIP-KCM) technique is employed to recover missing data and thereby improve the elimination of duplicate sensed data records. Finally, an Efficient Resource Balance Ranking-based Glowworm Swarm Optimization (ERBV-GSO) technique is used to resolve local search entrapment, obtain near-optimal solutions, and reduce workload imbalance on the resources available for task execution in the IoT-Cloud IaaS platform. Experiments were carried out using the NetworkX simulator, and the results of the NPSAGMM, PIP-KCM and ERBV-GSO techniques were compared with the NPSL, PIP, ERVS and Resource Fragmentation Aware (RF-Aware) algorithms. The experimental results showed that the proposed techniques produced substantial performance improvements: 3.602%/6.74% in Precision, 9.724%/8.77% in Recall and 5.350%/4.42% in Area under the Curve for the detection of event and error outliers; an improvement of 94.273% in F1-score, a 0.143 Reduction Ratio and a minimum 0.149% Root Mean Squared Error for redundant data elimination; and a minimum of 608 Virtual Machine migrations, 47.62% Resource Utilization and a 41.13% load-balancing degree for the allocation of the resources deployed to execute sensory data filtering tasks. The proposed techniques have therefore proven effective at improving load balancing, detecting event and error outliers, and eliminating redundant data records in the IoT-based Cloud IaaS infrastructure.
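    As a hedged illustration of the outlier-detection stage only (not the thesis pipeline), the sketch below fits a Gaussian mixture to a synthetic sensory stream and flags low-likelihood readings as event/error outliers; the data, component count, and threshold are all assumptions made for this example.

        # GMM-based outlier flagging over a synthetic one-dimensional sensor stream.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(42)
        normal = rng.normal(loc=25.0, scale=1.0, size=(500, 1))     # synthetic temperature stream
        anomalies = np.array([[40.0], [5.0], [33.0]])                # injected event/error outliers
        readings = np.vstack([normal, anomalies])

        gmm = GaussianMixture(n_components=2, random_state=0).fit(readings)
        log_lik = gmm.score_samples(readings)                       # per-sample log-likelihood
        threshold = np.percentile(log_lik, 1.0)                      # bottom 1% -> suspect
        flagged = readings[log_lik < threshold].ravel()
        print(sorted(flagged))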