
    Minimisation of energy consumption variance for multi-process manufacturing lines through genetic algorithm manipulation of production schedule

    Typical manufacturing scheduling algorithms do not consider the energy consumption of each job, or its variance, when they generate a production schedule. This can become problematic for manufacturers when local infrastructure has limited energy distribution capabilities. In this paper, a genetic-algorithm-based schedule modification algorithm is presented. By referencing energy consumption models for each job, adjustments are made to the original schedule so that it produces minimal variance in total energy consumption across a multi-process manufacturing production line, while operating within the constraints of the manufacturing line and the individual processes. Empirical results show that a significant reduction in energy consumption variance can be achieved on schedules containing multiple concurrent jobs.
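
    As an illustration of the idea, the minimal sketch below evolves job start times with a genetic algorithm so that the aggregate load curve has low variance. All data (the per-slot energy profiles in JOB_PROFILES, the horizon, and the GA parameters) are invented stand-ins, not the paper's models or constraints:

```python
import random
import statistics

# Hypothetical data: per-time-slot energy profile of each job (kW per slot).
JOB_PROFILES = [
    [5, 8, 8, 3],        # job 0
    [2, 9, 4],           # job 1
    [7, 7, 6, 6, 2],     # job 2
    [4, 10, 5],          # job 3
]
HORIZON = 12             # schedule length in time slots

def load_curve(starts):
    """Total energy drawn in each slot for a vector of job start times."""
    curve = [0.0] * HORIZON
    for start, profile in zip(starts, JOB_PROFILES):
        for t, e in enumerate(profile):
            curve[start + t] += e
    return curve

def fitness(starts):
    """Lower is better: variance of the aggregate load curve."""
    return statistics.pvariance(load_curve(starts))

def random_individual():
    return [random.randint(0, HORIZON - len(p)) for p in JOB_PROFILES]

def mutate(starts, rate=0.3):
    return [random.randint(0, HORIZON - len(p)) if random.random() < rate else s
            for s, p in zip(starts, JOB_PROFILES)]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=40, generations=200):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 4]          # keep the best quarter
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return min(pop, key=fitness)

best = evolve()
print("start times:", best, "load variance:", round(fitness(best), 2))
```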

    Enhancing Job Scheduling of an Atmospheric Intensive Data Application

    Nowadays, e-Science applications involve a great deal of data to support more accurate analyses. One such application domain is Radio Occultation, which manages satellite data. Grid Processing Management is a geographically distributed physical infrastructure, based on Grid Computing, that is implemented for the overall Radio Occultation processing and analysis. After a brief description of the algorithms adopted to characterize atmospheric profiles, the paper presents an improvement of job scheduling intended to decrease processing time and optimize resource utilization. The capacity of the existing physical Grid is extended with virtual machines in order to satisfy temporary job requests. Scheduling also plays an important role in the infrastructure; it is handled by a pair of schedulers developed to manage data automatically.
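
    A rough sketch of the elasticity idea, under assumed capacities: when the backlog of processing jobs exceeds what the physical grid nodes can absorb, temporary virtual machines are provisioned. The node counts and the JOBS_PER_NODE threshold are hypothetical, not the paper's configuration:

```python
from collections import deque

PHYSICAL_NODES = 4       # assumed fixed physical grid size
MAX_VIRTUAL_NODES = 6    # assumed cap on temporary VMs
JOBS_PER_NODE = 2        # concurrent jobs a node is assumed to handle

def nodes_needed(queued_jobs):
    """Use physical capacity first, then as many temporary VMs as allowed."""
    total = -(-queued_jobs // JOBS_PER_NODE)   # ceiling division
    virtual = max(0, min(total - PHYSICAL_NODES, MAX_VIRTUAL_NODES))
    return PHYSICAL_NODES, virtual

queue = deque(f"ro-job-{i}" for i in range(17))
phys, virt = nodes_needed(len(queue))
print(f"{len(queue)} jobs queued -> {phys} physical nodes + {virt} temporary VMs")
```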

    Grid Infrastructure for Domain Decomposition Methods in Computational ElectroMagnetics

    The accurate and efficient solution of Maxwell's equations is the problem addressed by the scientific discipline called Computational ElectroMagnetics (CEM). Many macroscopic phenomena in a great number of fields are governed by this set of differential equations: electronics, geophysics, medical and biomedical technologies, and virtual EM prototyping, besides the traditional antenna and propagation applications. Therefore, many efforts are focused on the development of new and more efficient approaches to solving Maxwell's equations. Interest in CEM applications keeps growing. Several problems that were hard to tackle a few years ago can now be easily addressed thanks to the reliability and flexibility of new technologies, together with increased computational power. This technological evolution opens the possibility of addressing large and complex tasks. Many of these applications aim to simulate electromagnetic behaviour, for example in terms of input impedance and radiation pattern in antenna problems, or Radar Cross Section in scattering applications. Problems whose solution requires high accuracy instead need full-wave analysis techniques, e.g., in the virtual prototyping context, where the objective is to obtain reliable simulations in order to minimize the number of measurements and, as a consequence, their cost. Other tasks require the analysis of complete structures (including a high number of details) by directly simulating a CAD model. This approach relieves researchers of the burden of removing useless details, while maintaining the original complexity and taking all details into account. Unfortunately, it implies (a) a high computational effort, due to the increased number of degrees of freedom, and (b) a worsening of the spectral properties of the linear system during complex analyses. The above considerations underline the need to identify appropriate information technologies that ease the computation of solutions and speed up the required processing. The authors' analysis and expertise suggest that Grid Computing techniques can be very useful for these purposes. Grids appear mainly in high-performance computing environments, where hundreds of off-the-shelf nodes are linked together and work in parallel to solve problems that previously could be addressed only sequentially or by using supercomputers. Grid Computing is a technique developed to process enormous amounts of data, and it enables large-scale resource sharing to solve problems in distributed scenarios. The main advantage of the Grid comes from parallel computing: if a problem can be split into smaller tasks that can be executed independently, its solution can be computed considerably faster. To exploit this advantage, it is necessary to identify a technique able to split the original electromagnetic task into a set of smaller subproblems. The Domain Decomposition (DD) technique, based on the block generation algorithm introduced in Matekovits et al. (2007) and Francavilla et al. (2011), perfectly addresses this requirement (see Section 3.4 for details). In this chapter, a Grid Computing infrastructure is presented. This architecture allows parallel block execution by distributing tasks to the nodes that belong to the Grid. The set of nodes comprises both physical and virtualized machines, which provides great flexibility and increases the available computational power. Furthermore, the presence of virtual nodes allows full and efficient Grid usage; indeed, the presented architecture can be used by different users running different applications.
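
    The divide-and-distribute pattern can be sketched as below, with a process pool standing in for the Grid nodes and a trivial numeric kernel standing in for a DD sub-problem solver; none of this is the chapter's actual architecture:

```python
from concurrent.futures import ProcessPoolExecutor
import math

def solve_block(block_id):
    """Placeholder for solving one independent DD sub-problem;
    returns a partial result to be combined afterwards."""
    return sum(math.sin(block_id + k) ** 2 for k in range(100_000))

if __name__ == "__main__":
    blocks = range(16)                   # 16 independent sub-problems
    with ProcessPoolExecutor() as pool:  # one worker per available core
        partials = list(pool.map(solve_block, blocks))
    print("combined result:", sum(partials))
```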

    Energy-aware integrated process planning and scheduling for job shops

    Process planning based on environmental consciousness and energy-efficient scheduling currently plays a critical role in sustainable manufacturing. Despite their interrelationship, these two topics have often been treated as independent of each other. It would therefore be beneficial to integrate process planning and scheduling for an integrated, energy-efficient optimisation of product design and manufacturing in a sustainable manufacturing system. This article proposes an energy-aware mathematical model for job shops that integrates process planning and scheduling. First, a mixed-integer programming model with performance indicators such as energy consumption and scheduling makespan is established to describe a multi-objective optimisation problem. Because the problem is strongly NP-hard, a modified genetic algorithm is adopted to explore optimal (Pareto) solutions trading off energy consumption against makespan. Finally, case studies of energy-aware integrated process planning and scheduling are performed, and the proposed algorithm is compared with other methods. The approach is shown to generate interesting results and can be used to improve the energy efficiency of sustainable manufacturing processes at the process planning and scheduling levels.
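
    The Pareto notion used here can be illustrated with a short non-dominated filter over candidate (energy, makespan) pairs; the candidate values below are fabricated, and the paper's modified genetic algorithm is not reproduced:

```python
def dominates(a, b):
    """True if schedule a is no worse than b in both objectives
    (energy, makespan) and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only candidates that no other candidate dominates."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Fabricated (energy, makespan) pairs for a handful of candidate schedules.
candidates = [(120, 45), (100, 50), (130, 40), (95, 60), (100, 48), (140, 39)]
print(pareto_front(candidates))   # (100, 50) drops out: (100, 48) dominates it
```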

    A Model of Resource-Aware Load Balancing Scheme using Multi-objective Optimization in Cloud Environment

    Cloud computing is a new class of network-based computing that provides customers with computing resources as a service over a network, on demand. This concept creates new opportunities for business and IT enterprises to achieve their goals. In cloud computing there are usually a number of jobs that must be executed with the available resources to achieve optimal performance: the least possible total completion time, the shortest response time, efficient utilization of resources, and so on. To accomplish these goals and achieve high performance, it is important to design and develop a multi-objective scheduling algorithm. Scheduling tasks while also satisfying users' Quality of Service requirements is therefore highly challenging. This paper proposes a multi-objective scheduling algorithm that considers a wide variety of attributes in the cloud environment. It aims to improve the performance of CPU, memory and network operations by reducing the load on a virtual machine (VM) through a load balancing method, and it optimizes resource utilization by means of a resource-aware scheduling algorithm. Keywords: VM, QoS, Non-dominated sorting, Pareto optimal, Makespan, AHP
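
    As a toy illustration of the load-balancing step, the sketch below places a task on the VM with the lowest weighted load across CPU, memory and network. The VM metrics and the weight vector are assumptions, and the paper's non-dominated sorting and AHP machinery is not modelled:

```python
WEIGHTS = {"cpu": 0.5, "mem": 0.3, "net": 0.2}   # assumed relative importance

# Assumed current utilisation (0..1) of each VM per resource.
vms = {
    "vm-1": {"cpu": 0.70, "mem": 0.40, "net": 0.20},
    "vm-2": {"cpu": 0.30, "mem": 0.80, "net": 0.50},
    "vm-3": {"cpu": 0.50, "mem": 0.50, "net": 0.10},
}

def weighted_load(metrics):
    """Combine per-resource utilisation into one scalar load score."""
    return sum(WEIGHTS[k] * metrics[k] for k in WEIGHTS)

def place_task(vms):
    """Return the name of the VM with the lowest combined load."""
    return min(vms, key=lambda name: weighted_load(vms[name]))

print("task placed on:", place_task(vms))   # vm-3: 0.25 + 0.15 + 0.02 = 0.42
```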

    Many-Task Computing and Blue Waters

    This report discusses many-task computing (MTC) generically and in the context of the proposed Blue Waters systems, which is planned to be the largest NSF-funded supercomputer when it begins production use in 2012. The aim of this report is to inform the BW project about MTC, including understanding aspects of MTC applications that can be used to characterize the domain and understanding the implications of these aspects to middleware and policies. Many MTC applications do not neatly fit the stereotypes of high-performance computing (HPC) or high-throughput computing (HTC) applications. Like HTC applications, by definition MTC applications are structured as graphs of discrete tasks, with explicit input and output dependencies forming the graph edges. However, MTC applications have significant features that distinguish them from typical HTC applications. In particular, different engineering constraints for hardware and software must be met in order to support these applications. HTC applications have traditionally run on platforms such as grids and clusters, through either workflow systems or parallel programming systems. MTC applications, in contrast, will often demand a short time to solution, may be communication intensive or data intensive, and may comprise very short tasks. Therefore, hardware and software for MTC must be engineered to support the additional communication and I/O and must minimize task dispatch overheads. The hardware of large-scale HPC systems, with its high degree of parallelism and support for intensive communication, is well suited for MTC applications. However, HPC systems often lack a dynamic resource-provisioning feature, are not ideal for task communication via the file system, and have an I/O system that is not optimized for MTC-style applications. Hence, additional software support is likely to be required to gain full benefit from the HPC hardware.
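
    The task-graph structure that distinguishes MTC can be sketched with Python's standard graphlib (3.9+): discrete tasks with explicit dependencies are dispatched to a worker pool as soon as their predecessors complete. The graph and task bodies are invented placeholders:

```python
from concurrent.futures import ThreadPoolExecutor
from graphlib import TopologicalSorter

graph = {                       # task -> set of tasks it depends on
    "stage":  set(),
    "filter": {"stage"},
    "model":  {"stage"},
    "merge":  {"filter", "model"},
}

def run(name):
    """Placeholder task body; a real MTC task would do actual work/IO."""
    print("running", name)
    return name

ts = TopologicalSorter(graph)
ts.prepare()
with ThreadPoolExecutor(max_workers=4) as pool:
    while ts.is_active():
        ready = list(ts.get_ready())            # tasks whose deps are done
        for name, _ in zip(ready, pool.map(run, ready)):
            ts.done(name)                       # unlock dependent tasks
```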

    Comparison of agent-based scheduling to look-ahead heuristics for real-time transportation problems

    We consider the real-time scheduling of full-truckload transportation orders with time windows that arrive during schedule execution. Because a fast scheduling method is required, look-ahead heuristics are traditionally used to solve these kinds of problems. As an alternative, we introduce an agent-based approach in which intelligent vehicle agents schedule their own routes. They interact with job agents, who strive for minimum transportation costs, through a Vickrey auction for each incoming order. This approach offers several advantages: it is fast, requires relatively little information, and facilitates easy schedule adjustments in reaction to information updates. We compare the agent-based approach to more traditional hierarchical heuristics in an extensive simulation experiment. We find that a properly designed multi-agent approach performs as well as or even better than traditional methods. In particular, the multi-agent approach yields fewer empty miles and a more stable service level.
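
    The auction mechanism can be sketched as follows: each vehicle agent bids its marginal cost of inserting the incoming order into its route, the cheapest vehicle wins, and (per the Vickrey rule) it is paid the second-lowest bid. The marginal_cost function here is a random placeholder for a real route-insertion cost:

```python
import random

def marginal_cost(vehicle, order):
    """Placeholder for the extra cost of adding `order` to `vehicle`'s route."""
    return random.uniform(10, 100)

def vickrey_auction(vehicles, order):
    """Lowest bidder wins; the price is the second-lowest bid."""
    bids = sorted((marginal_cost(v, order), v) for v in vehicles)
    (best_bid, winner), (second_bid, _) = bids[0], bids[1]
    return winner, second_bid

winner, price = vickrey_auction(["truck-A", "truck-B", "truck-C"], "order-17")
print(f"{winner} wins and is paid {price:.2f}")
```

    The second-price rule is what makes truthful bidding of the real marginal cost the dominant strategy for each vehicle agent, which is why the approach needs relatively little information to work well.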