Analyzing logic programs with dynamic scheduling
Traditional logic programming languages, such as Prolog, use a fixed left-to-right atom scheduling rule. Recent logic programming languages, however, usually provide more flexible scheduling in which computation generally proceeds left-to-right but in which some calls are dynamically "delayed" until their arguments are sufficiently instantiated to allow the call to run efficiently. Such dynamic scheduling has a significant cost. We give a framework for the global analysis of logic programming languages with dynamic scheduling and show that program analysis based on this framework supports optimizations which remove much of the overhead of dynamic scheduling.
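The delay mechanism described above can be sketched as follows. This is an illustrative toy interpreter of our own construction, not the paper's analysis framework: goals run left-to-right, but a goal whose arguments are insufficiently instantiated is delayed and retried once a later goal binds the variables it waits on.

```python
from collections import deque

def run(goals, bindings):
    """Execute goals left-to-right, delaying those not yet ready.

    Each goal is a (ready, action) pair: `ready(bindings)` says whether the
    goal's arguments are sufficiently instantiated; `action(bindings)` runs
    it, possibly binding new variables.
    """
    pending = deque(goals)
    delayed = []
    while pending:
        ready, action = pending.popleft()
        if ready(bindings):
            action(bindings)
            # A new binding may wake delayed goals: re-check all of them.
            pending.extend(delayed)
            delayed.clear()
        else:
            delayed.append((ready, action))
    return bindings, delayed  # non-empty `delayed` means goals floundered

# Example: `double(X, Y)` delays until X is bound; a later goal binds X := 3.
goals = [
    (lambda b: "X" in b, lambda b: b.__setitem__("Y", 2 * b["X"])),  # double
    (lambda b: True,     lambda b: b.__setitem__("X", 3)),           # bind X
]
bindings, floundered = run(goals, {})
print(bindings)  # {'X': 3, 'Y': 6}
```

The delay check on every wake-up is exactly the run-time overhead that the paper's global analysis aims to optimize away.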
Hybrid static/dynamic scheduling for already optimized dense matrix factorization
We present the use of a hybrid static/dynamic scheduling strategy of the task dependency graph for direct methods used in dense numerical linear algebra. This strategy provides a balance of data locality, load balance, and low dequeue overhead. We show that the use of this scheduling in communication-avoiding dense factorization leads to significant performance gains. On a 48-core AMD Opteron NUMA machine, our experiments show that we can achieve up to 64% improvement over a version of CALU that uses fully dynamic scheduling, and up to 30% improvement over the version of CALU that uses fully static scheduling. On a 16-core Intel Xeon machine, our hybrid static/dynamic scheduling approach is up to 8% faster than the version of CALU that uses fully static or fully dynamic scheduling. Our algorithm leads to speedups over the corresponding routines for computing LU factorization in well-known libraries. On the 48-core AMD NUMA machine, our best implementation is up to 110% faster than MKL, while on the 16-core Intel Xeon machine, it is up to 82% faster than MKL. Our approach also shows significant speedups compared with PLASMA on both of these systems.
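The hybrid idea can be sketched generically (this is our simplification, not the paper's CALU scheduler): a fixed fraction of tasks is statically pre-assigned to each worker for data locality and zero dequeue cost, and the remainder goes to a shared queue that idle workers drain dynamically for load balance.

```python
import threading
from collections import deque

def hybrid_schedule(tasks, n_workers, static_fraction=0.5):
    """Run `tasks` (callables) on `n_workers` threads: a static phase
    (pre-assigned round-robin slices) followed by a dynamic phase
    (shared queue drained under a lock)."""
    split = int(len(tasks) * static_fraction)
    static_part, dynamic_part = tasks[:split], tasks[split:]
    shared = deque(dynamic_part)          # dynamic pool
    lock = threading.Lock()
    results = [[] for _ in range(n_workers)]

    def worker(wid):
        # Phase 1: run the statically assigned slice (locality, no lock).
        for t in static_part[wid::n_workers]:
            results[wid].append(t())
        # Phase 2: pull from the shared queue until it is empty.
        while True:
            with lock:
                if not shared:
                    return
                t = shared.popleft()
            results[wid].append(t())

    threads = [threading.Thread(target=worker, args=(w,))
               for w in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return results

out = hybrid_schedule([lambda i=i: i * i for i in range(10)], n_workers=3)
print(sorted(x for part in out for x in part))  # every task ran exactly once
```

Tuning `static_fraction` trades locality (high values) against load balance (low values), which is the balance the abstract refers to.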
Low Power Dynamic Scheduling for Computing Systems
This paper considers energy-aware control for a computing system with two
states: "active" and "idle." In the active state, the controller chooses to
perform a single task using one of multiple task processing modes. The
controller then saves energy by choosing an amount of time for the system to be
idle. These decisions affect processing time, energy expenditure, and an
abstract attribute vector that can be used to model other criteria of interest
(such as processing quality or distortion). The goal is to optimize time
average system performance. Applications of this model include a smart phone
that makes energy-efficient computation and transmission decisions, a computer
that processes tasks subject to rate, quality, and power constraints, and a
smart grid energy manager that allocates resources in reaction to a time
varying energy price. The solution methodology of this paper uses the theory of
optimization for renewal systems developed in our previous work. This paper is
written in tutorial form and develops the main concepts of the theory using
several detailed examples. It also highlights the relationship between online
dynamic optimization and linear fractional programming. Finally, it provides
exercises to help the reader learn the main concepts and apply them to their
own optimizations. This paper is an arXiv technical report, and is a
preliminary version of material that will appear as a book chapter in an
upcoming book on green communications and networking. (Comment: 26 pages, 10 figures)
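The linear-fractional structure the abstract highlights can be made concrete with a small sketch (our construction, not taken from the paper): minimizing time-average power, i.e. the ratio E[energy]/E[duration] over processing modes, via Dinkelbach's classic parametric method for fractional programs.

```python
def dinkelbach(modes, tol=1e-9):
    """modes: list of (energy, duration) pairs. Returns the minimum
    achievable ratio energy/duration.

    Repeats: solve the parametric subproblem min_m energy_m - theta*duration_m;
    when its optimum reaches ~0, theta is the fractional optimum.
    """
    theta = 0.0
    while True:
        # Parametric subproblem: best mode for the current theta.
        e, d = min(modes, key=lambda m: m[0] - theta * m[1])
        if abs(e - theta * d) < tol:
            return theta
        theta = e / d

# Three hypothetical modes: (energy in J, duration in s).
modes = [(5.0, 1.0), (6.0, 2.0), (9.0, 4.0)]
print(dinkelbach(modes))  # 2.25 J/s: the slow mode (9.0, 4.0) wins per unit time
```

With a finite mode set the iteration terminates in at most one step per mode, since each update strictly decreases theta toward the minimum ratio.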
Dynamic scheduling in a multi-product manufacturing system
To remain competitive in the global marketplace, manufacturing companies need to improve their operational practices. One way to increase competitiveness in manufacturing is to implement a proper scheduling system. This is important to enable job orders to be completed on time, minimize waiting time, and maximize utilization of equipment and machinery. The dynamics of a real manufacturing system are very complex in nature. Schedules developed with deterministic algorithms are unable to deal effectively with uncertainties in demand and capacity, and significant differences can be found between planned schedules and their actual implementation. This study attempted to develop a scheduling system that reacts quickly and reliably to accommodate changes in product demand and manufacturing capacity. A case study, a 6-by-6 job shop scheduling problem, was adapted with uncertainty elements added to the data sets. A simulation model was designed and implemented using the ARENA simulation package to generate various job shop scheduling scenarios, whose performance was evaluated under three scheduling rules: first-in-first-out (FIFO), earliest due date (EDD), and shortest processing time (SPT). An artificial neural network (ANN) model was developed and trained on the scheduling scenarios generated by the ARENA simulation. The experimental results suggest that the ANN scheduling model can provide moderately reliable predictions for limited scenarios when predicting the number of completed jobs, maximum flowtime, average machine utilization, and average queue length. This study provides a better understanding of the effects of changes in demand and capacity on job shop schedules. Areas for further study include: (i) fine-tuning the proposed ANN scheduling model, (ii) considering a wider variety of job shop environments, and (iii) incorporating an expert system for interpretation of results.
The theoretical framework proposed in this study can be used as a basis for further investigation.
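The three dispatching rules compared in the study can be sketched on a single machine (a deliberate simplification; the paper's 6-by-6 job shop, ARENA model, and ANN are not reproduced here):

```python
def sequence(jobs, rule):
    """jobs: list of dicts with 'arrival', 'proc', 'due'. Returns job order."""
    keys = {
        "FIFO": lambda j: j["arrival"],   # first-in-first-out
        "EDD":  lambda j: j["due"],       # earliest due date
        "SPT":  lambda j: j["proc"],      # shortest processing time
    }
    return sorted(jobs, key=keys[rule])

def mean_flowtime(order):
    """Average completion time when jobs run back-to-back on one machine."""
    t, total = 0, 0
    for j in order:
        t += j["proc"]
        total += t                        # completion time of this job
    return total / len(order)

jobs = [
    {"arrival": 0, "proc": 7, "due": 9},
    {"arrival": 1, "proc": 2, "due": 15},
    {"arrival": 2, "proc": 4, "due": 8},
]
# SPT provably minimizes mean flowtime on a single machine.
print(mean_flowtime(sequence(jobs, "SPT")))   # 7.0
print(mean_flowtime(sequence(jobs, "FIFO")))  # ~9.67
```

Each rule optimizes a different criterion (EDD targets lateness, SPT flowtime), which is why the study evaluates them across several performance measures.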
Design criteria for applications with non-manifest loops
In the design process of high-throughput applications, design choices concerning the type of processor architecture and the appropriate scheduling mechanism have to be made. Take a Reed-Solomon decoder as an example: the number of clock cycles consumed in decoding a code depends on the number of errors within that code. Since this is not known in advance, and the environment in which the code is transmitted can cause a variable number of errors, a processor architecture that employs a static scheduling scheme has to assume the worst-case number of clock cycles in order to cope with the worst-case situation and provide correct results. A processor that employs a dynamic scheduling scheme, on the other hand, can reclaim wasted clock cycles by scheduling exactly the clock cycles that are needed rather than those needed for the worst case. Since processor architectures that employ dynamic scheduling schemes have more overhead, designers have to make their choice beforehand. In this paper we address the problem of making the correct choice between a static and a dynamic scheduling scheme. The strategy is to determine whether the application possesses non-manifest behavior and to weigh this dynamic behavior against the static scheduling solutions that were quite common in the past. We provide criteria for choosing the correct scheduling architecture for a high-throughput application based upon the environmental and algorithm-specification constraints. Keywords: non-manifest loop scheduling, variable-latency functional units, dynamic hardware scheduling, self-scheduling hardware units, optimized data-flow machine architecture
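The trade-off behind these criteria can be illustrated with a toy cycle-count model of our own (the paper's actual criteria are more detailed): a static schedule must budget worst-case cycles per item, while a dynamic scheduler pays each item's actual cycles plus a fixed scheduling overhead. Dynamic wins when average work is well below the worst case.

```python
def static_cycles(workloads, worst_case):
    """Static schedule: every item is budgeted at the worst case."""
    return worst_case * len(workloads)

def dynamic_cycles(workloads, overhead):
    """Dynamic schedule: actual cycles per item plus scheduler overhead."""
    return sum(w + overhead for w in workloads)

# Reed-Solomon-like behaviour: most codewords have few errors, one is bad.
workloads = [10, 10, 12, 80, 10, 11]             # actual cycles per codeword
print(static_cycles(workloads, worst_case=80))   # 480
print(dynamic_cycles(workloads, overhead=5))     # 163
```

When the overhead term dominates the gap between average and worst-case work, the comparison flips and static scheduling is the right choice.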
Dynamic scheduling: integrating schedule risk analysis with earned value management
The topic of this paper is dynamic project scheduling. We illustrate that project scheduling is a dynamic process involving a continuous stream of changes, a never-ending process supporting decisions that must be made throughout the life of the project. The focus lies on three crucial dimensions of dynamic scheduling, which can be briefly outlined as follows: (i) baseline scheduling, to construct a timetable that provides a start and end date for each project activity, taking activity relations, resource constraints, and other project characteristics into account, and aiming to reach a certain scheduling objective; (ii) risk analysis, to analyze the strengths and weaknesses of the project schedule in order to obtain information about the schedule's sensitivity and the changes that will undoubtedly occur during project progress; and (iii) project control, to measure the (time and cost) performance of a project during its progress and to use the information obtained during the scheduling and risk analysis steps to monitor and update the project and take corrective action in case of problems. The current paper focuses on the importance and crucial role of the baseline scheduling component for the two other components, and on integrating the schedule risk and project control components to support better corrective-action decision making when the project is in trouble.
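The earned value indicators underlying the project control dimension are standard and can be computed directly (a minimal sketch of the classic EVM formulas; the paper's integration with schedule risk analysis is not reproduced here):

```python
def evm(planned_value, earned_value, actual_cost):
    """Return the classic earned value management indicators."""
    return {
        "SV":  earned_value - planned_value,   # schedule variance
        "CV":  earned_value - actual_cost,     # cost variance
        "SPI": earned_value / planned_value,   # schedule performance index
        "CPI": earned_value / actual_cost,     # cost performance index
    }

# Hypothetical status: PV=100, EV=80, AC=90 -> behind schedule, over cost.
m = evm(100.0, 80.0, 90.0)
print(m["SPI"], m["CPI"])  # SPI < 1 and CPI < 1 both signal trouble
```

SPI and CPI below 1 are exactly the signals that trigger the corrective actions discussed in the abstract.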
Dynamic Scheduling for Delay Guarantees for Heterogeneous Cognitive Radio Users
We study an uplink multi secondary user (SU) system having statistical delay
constraints, and an average interference constraint to the primary user (PU).
SUs with heterogeneous interference channel statistics, to the PU, experience
heterogeneous delay performances since SUs causing low interference are
scheduled more frequently than those causing high interference. We propose a
scheduling algorithm that can provide arbitrary average delay guarantees to SUs
irrespective of their statistical channel qualities. We derive the algorithm
using the Lyapunov technique and show that it yields bounded queues and satisfies
the interference constraints. Using simulations, we show its superiority over
the Max-Weight algorithm. (Comment: Asilomar 2015)
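A generic Lyapunov-style scheduler of this flavor can be sketched as follows. This is our own simplification, not the paper's algorithm: each slot, serve the secondary user with the largest backlog weighted against the interference it causes to the primary user, tracked by a virtual queue for the average-interference constraint.

```python
import random

def schedule(slots, arrival_rates, interference, i_max, seed=0):
    """Simulate a max-weight-style scheduler with a virtual queue `z`
    enforcing average interference <= i_max."""
    rng = random.Random(seed)
    n = len(arrival_rates)
    q = [0.0] * n        # per-user data queues
    z = 0.0              # virtual queue for the interference constraint
    served = [0] * n
    for _ in range(slots):
        # Max-weight choice: backlog minus interference penalty.
        u = max(range(n), key=lambda i: q[i] - z * interference[i])
        served[u] += 1
        z = max(z + interference[u] - i_max, 0.0)   # virtual queue update
        for i in range(n):
            q[i] = max(q[i] - (1.0 if i == u else 0.0), 0.0)  # service
            q[i] += 1.0 if rng.random() < arrival_rates[i] else 0.0  # arrival
    return served, z

served, z = schedule(10_000, [0.3, 0.3], interference=[0.2, 0.8], i_max=0.5)
print(served)  # the low-interference user tends to be scheduled more often
```

The skew toward the low-interference user is exactly the heterogeneous-delay effect the paper's algorithm is designed to correct with explicit delay guarantees.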
Dynamic Scheduling of Handling Equipment at Automated Container Terminals
In this paper we consider the problem of integrated scheduling of various types of handling equipment at an automated container terminal in a dynamic environment. This means that the handling times are not known exactly beforehand and that the order in which the different pieces of equipment handle the containers need not be specified completely in advance. Instead, (partial) schedules may be updated when new information on realizations of handling times becomes available. We present an optimization-based Beam Search heuristic and several dispatching rules. An extensive computational study is carried out to investigate the performance of these solution methods under different scenarios. The main conclusion is that, in our tests, the Beam Search heuristic performs best on average, but that some of the relatively simple dispatching rules perform almost as well. Furthermore, our study indicates that it is more effective to base a plan on a long horizon with inaccurate data than to update the plan often in order to take newly available information into account. Keywords: beam search; dynamic scheduling; container terminal; dispatching rules
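The beam search idea can be sketched generically (our toy version on a single-machine sequencing objective; the terminal-specific model with multiple equipment types is not reproduced): extend partial schedules level by level, but keep only the best few at each level.

```python
def beam_search(proc_times, width=2):
    """Build a job sequence minimizing total completion time, keeping only
    the `width` best partial schedules at each level."""
    beam = [((), 0, 0)]   # (sequence, current time, total completion time)
    n = len(proc_times)
    for _ in range(n):
        children = []
        for seq, t, total in beam:
            for j in range(n):
                if j in seq:
                    continue
                t2 = t + proc_times[j]
                children.append((seq + (j,), t2, total + t2))
        children.sort(key=lambda c: c[2])     # prune to the best `width`
        beam = children[:width]
    return beam[0]

seq, _, total = beam_search([7, 2, 4], width=2)
print(seq, total)  # (1, 2, 0) 21 -- matches the SPT-optimal sequence
```

Width 1 degenerates to a greedy dispatching rule and width n! to exhaustive search, which mirrors the heuristic-vs-dispatching-rule comparison in the study.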