4,542 research outputs found

    Project management decisions with uncertain targets

    Get PDF

    Finish Them!: Pricing Algorithms for Human Computation

    Full text link
    Given a batch of human computation tasks, a commonly ignored aspect is how the price (i.e., the reward paid to human workers) of these tasks must be set or varied in order to meet latency or cost constraints. Often, the price is set up-front and not modified, leading either to a much higher monetary cost than needed (if the price is set too high) or to a much larger latency than expected (if the price is set too low). Leveraging a pricing model from prior work, we develop algorithms to optimally set and then vary price over time in order to meet (a) a user-specified deadline while minimizing total monetary cost, or (b) a user-specified monetary budget constraint while minimizing total elapsed time. We leverage techniques from decision theory (specifically, Markov Decision Processes) for both of these problems, and demonstrate that our techniques lead to up to a 30% reduction in cost over schemes proposed in prior work. Furthermore, we develop techniques to speed up the computation, enabling users to run the price-setting algorithms on the fly.
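    The sketch below is a minimal illustration of this kind of deadline-constrained pricing MDP, not the paper's actual pricing model: it assumes each open task completes within a time step with a price-dependent probability (the candidate prices, completion probabilities, and penalty are invented), and uses backward induction to pick the cost-minimizing price for each (steps-remaining, tasks-remaining) state.

```python
from math import comb
import numpy as np

# Hypothetical parameters, for illustration only.
PRICES = [0.05, 0.10, 0.25]                            # candidate rewards per task
COMPLETION_PROB = {0.05: 0.2, 0.10: 0.5, 0.25: 0.8}    # per-task completion prob. per step
DEADLINE = 10                                          # time steps available
N_TASKS = 5                                            # tasks in the batch
MISS_PENALTY = 100.0                                   # cost per unfinished task at the deadline

def solve():
    # V[t, n] = minimum expected future cost with n unfinished tasks and t steps left.
    V = np.zeros((DEADLINE + 1, N_TASKS + 1))
    V[0, :] = MISS_PENALTY * np.arange(N_TASKS + 1)    # leftover tasks at the deadline
    policy = np.zeros((DEADLINE + 1, N_TASKS + 1))
    for t in range(1, DEADLINE + 1):
        for n in range(1, N_TASKS + 1):
            best_cost, best_price = None, None
            for price in PRICES:
                p = COMPLETION_PROB[price]
                # Tasks completed this step ~ Binomial(n, p); pay `price` per completion.
                cost = 0.0
                for k in range(n + 1):
                    prob = comb(n, k) * p**k * (1 - p)**(n - k)
                    cost += prob * (k * price + V[t - 1, n - k])
                if best_cost is None or cost < best_cost:
                    best_cost, best_price = cost, price
            V[t, n], policy[t, n] = best_cost, best_price
    return V, policy

V, policy = solve()
print("expected total cost:", round(V[DEADLINE, N_TASKS], 3))
print("price to post with all tasks open:", policy[DEADLINE, N_TASKS])
```

    Because the value function is computed state by state, the resulting policy naturally raises the price as the deadline approaches with work still outstanding, which is the behaviour the abstract describes.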

    Flexible provisioning of Web service workflows

    No full text
    Web services promise to revolutionise the way computational resources and business processes are offered and invoked in open, distributed systems, such as the Internet. These services are described using machine-readable meta-data, which enables consumer applications to automatically discover and provision suitable services for their workflows at run-time. However, current approaches have typically assumed service descriptions are accurate and deterministic, and so have neglected to account for the fact that services in these open systems are inherently unreliable and uncertain. Specifically, network failures, software bugs and competition for services may regularly lead to execution delays or even service failures. To address this problem, the process of provisioning services needs to be performed in a more flexible manner than has so far been considered, in order to proactively deal with failures and to recover workflows that have partially failed. To this end, we devise and present a heuristic strategy that varies the provisioning of services according to their predicted performance. Using simulation, we then benchmark our algorithm and show that it leads to a 700% improvement in average utility, while successfully completing up to eight times as many workflows as approaches that do not consider service failures.
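    As a rough illustration of failure-aware provisioning (a hedged sketch under our own assumptions, not the authors' heuristic), the snippet below provisions redundant candidate services per workflow task so that the probability of at least one succeeding meets a target, using each service's predicted failure probability; the service names and failure rates are invented.

```python
import math

def providers_needed(p_fail: float, target_success: float = 0.99) -> int:
    """Smallest n such that 1 - p_fail**n >= target_success."""
    if p_fail <= 0.0:
        return 1
    n = math.log(1.0 - target_success) / math.log(p_fail)
    return max(1, math.ceil(n))

def provision(workflow, service_stats, target_success=0.99):
    """Map each task to a redundant set of candidate services, least failure-prone first.

    Simplification: the redundancy level is sized from the best candidate's
    predicted failure probability.
    """
    plan = {}
    for task in workflow:
        candidates = sorted(service_stats[task], key=lambda s: s["p_fail"])
        n = providers_needed(candidates[0]["p_fail"], target_success)
        plan[task] = [c["name"] for c in candidates[:n]]
    return plan

# Example with hypothetical predicted failure rates.
stats = {"fetch": [{"name": "svcA", "p_fail": 0.3}, {"name": "svcB", "p_fail": 0.4}],
         "transform": [{"name": "svcC", "p_fail": 0.05}]}
print(provision(["fetch", "transform"], stats))
```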

    Moral Hazard, Incentive Contracts and Risk: Evidence from Procurement

    Get PDF
    Deadlines and penalties are widely used to incentivize effort. We model how these incentive contracts affect the work rate and time taken in a procurement setting, characterizing the efficient contract design. Using new micro-level data on Minnesota highway construction contracts that includes day-by-day information on work plans, hours actually worked and delays, we find evidence of moral hazard. As an application, we build an econometric model that endogenizes the work rate, and simulate how different incentive structures affect outcomes and the variance of contractor payments. Accounting for the traffic delays caused by construction, switching to a more efficient design would substantially increase welfare without substantially increasing the risk borne by contractors.
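    To make the incentive mechanism concrete, here is a toy simulation under assumptions of our own (not the paper's estimated econometric model): a contractor chooses a daily work rate to minimize convex effort costs plus expected per-day delay penalties under random productivity shocks, and a stiffer penalty pushes the chosen work rate up.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_total_cost(work_rate, penalty_per_day, total_work=100.0,
                        deadline=20, effort_cost=1.0, n_sims=500):
    """Mean simulated cost: convex daily effort plus penalty per day of delay."""
    costs = []
    for _ in range(n_sims):
        done, day = 0.0, 0
        while done < total_work:
            day += 1
            done += work_rate * rng.uniform(0.7, 1.3)   # daily productivity shock
        delay = max(0, day - deadline)
        costs.append(effort_cost * work_rate**2 * day + penalty_per_day * delay)
    return float(np.mean(costs))

for penalty in (0.0, 50.0, 200.0):
    rates = np.linspace(3.0, 8.0, 11)
    best = min(rates, key=lambda r: expected_total_cost(r, penalty))
    print(f"penalty {penalty:>6.0f}/day -> chosen work rate {best:.1f}")
```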

    Stochastic scheduling and workload allocation : QoS support and profitable brokering in computing grids

    No full text
    Abstract: The Grid can be seen as a collection of services, each of which performs some functionality. Users of the Grid seek to use combinations of these services to perform the overall task they need to achieve. In general this can be seen as a set of services with a workflow document describing how these services should be combined. The user may also have certain constraints on the workflow operations, such as execution time or cost to the user, specified in the form of a Quality of Service (QoS) document. The users submit their workflow to a brokering service along with the QoS document. The brokering service's task is to map any given workflow to a subset of the Grid services taking the QoS and state of the Grid into account, i.e., service availability and performance. We propose an approach for generating constraint equations describing the workflow, the QoS requirements and the state of the Grid. This set of equations may be solved using Mixed-Integer Linear Programming (MILP), which is the traditional method. We further develop a novel 2-stage stochastic MILP which is capable of dealing with the volatile nature of the Grid and adapting the selection of the services during the lifetime of the workflow. We present experimental results comparing our approaches, showing that the 2-stage stochastic programming approach performs consistently better than other traditional approaches. Next we address workload allocation techniques for Grid workflows in a multi-cluster Grid. We model individual clusters as M/M/k queues and obtain a numerical solution for missed deadlines (failures) of tasks of Grid workflows. We also present an efficient algorithm for obtaining workload allocations of clusters. Next we model individual cluster resources as G/G/1 queues and solve an optimisation problem that minimises QoS requirement violation, provides QoS guarantees and outperforms reservation-based scheduling algorithms. Both approaches are evaluated through an experimental simulation, and the results confirm that the proposed workload allocation strategies combined with traditional scheduling algorithms perform considerably better in terms of satisfying QoS requirements of Grid workflows than scheduling algorithms that do not employ such workload allocation techniques. Next we develop a novel method for Grid brokers that aims at maximising profit whilst satisfying end-user needs with a sufficient guarantee in a volatile utility Grid. We develop a 2-stage stochastic MILP which is capable of dealing with the volatile nature of the Grid and obtaining cost bounds that ensure that end-user cost is minimised or satisfied and the broker's profit is maximised with sufficient guarantee. These bounds help brokers know beforehand whether the budget limits of end-users can be satisfied and, if not, obtain appropriate future leases from service providers. Experimental results confirm the efficacy of our approach.
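    For the M/M/k part, the snippet below is a minimal sketch of the kind of deadline-miss calculation described, using the Erlang C formula for the waiting-time tail; parameter values are illustrative, and this covers only queueing delay rather than the thesis's full numerical solution.

```python
from math import factorial, exp

def erlang_c(arrival_rate, service_rate, servers):
    """Probability an arriving task has to wait in an M/M/k queue (requires stability)."""
    a = arrival_rate / service_rate                    # offered load in Erlangs
    assert a < servers, "queue is unstable"
    tail = (a**servers / factorial(servers)) * (servers / (servers - a))
    head = sum(a**i / factorial(i) for i in range(servers))
    return tail / (head + tail)

def prob_deadline_miss(arrival_rate, service_rate, servers, deadline):
    """P(queueing delay > deadline) for an M/M/k cluster."""
    c = erlang_c(arrival_rate, service_rate, servers)
    return c * exp(-(servers * service_rate - arrival_rate) * deadline)

# Example: 8 nodes, 10 tasks/s arriving, each node serving 1.5 tasks/s,
# and a 2-second bound on queueing delay (all numbers made up).
print(prob_deadline_miss(arrival_rate=10.0, service_rate=1.5, servers=8, deadline=2.0))
```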

    From Packet to Power Switching: Digital Direct Load Scheduling

    Full text link
    At present, the power grid has tight control over its dispatchable generation capacity but very coarse control on the demand. Energy consumers are shielded from making price-aware decisions, which degrades the efficiency of the market. This state of affairs tends to favor fossil fuel generation over renewable sources. Because of the technological difficulties of storing electric energy, the quest for mechanisms that would make the demand for electricity controllable on a day-to-day basis is gaining prominence. The goal of this paper is to provide one such mechanism, which we call Digital Direct Load Scheduling (DDLS). DDLS is a direct load control mechanism in which we unbundle individual requests for energy and digitize them so that they can be automatically scheduled in a cellular architecture. Specifically, rather than storing energy or interrupting the job of appliances, we choose to hold requests for energy in queues and optimize the service time of individual appliances belonging to a broad class which we refer to as "deferrable loads". The function of each neighborhood scheduler is to optimize the time at which these appliances start to function. This process is intended to shape the aggregate load profile of the neighborhood so as to optimize an objective function which incorporates the spot price of energy, and also allows distributed energy resources to supply part of the generation dynamically. Comment: Accepted by the IEEE Journal on Selected Areas in Communications (JSAC): Smart Grid Communications series, to appear.
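    The scheduling idea can be sketched as follows (a simplified illustration under our own assumptions, not the DDLS algorithm itself): each deferrable request carries a duration and a completion window, and the neighborhood scheduler picks the cheapest feasible start hour against an hourly spot-price curve. The prices and appliance parameters below are made up.

```python
# Illustrative day-ahead hourly spot prices in $/kWh.
spot_price = [0.12, 0.10, 0.09, 0.08, 0.08, 0.09, 0.14, 0.20,
              0.22, 0.21, 0.19, 0.18, 0.17, 0.17, 0.18, 0.20,
              0.24, 0.28, 0.30, 0.27, 0.22, 0.18, 0.15, 0.13]

def schedule_load(duration_h, earliest_start, latest_finish, power_kw):
    """Return (start_hour, energy_cost) minimizing cost within the job's window."""
    best = None
    for start in range(earliest_start, latest_finish - duration_h + 1):
        cost = power_kw * sum(spot_price[start:start + duration_h])
        if best is None or cost < best[1]:
            best = (start, cost)
    return best

# Example: a 2-hour, 1.5 kW dishwasher run that must finish by hour 7 (7 a.m.).
print(schedule_load(duration_h=2, earliest_start=0, latest_finish=7, power_kw=1.5))
```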

    Generalizing List Scheduling for Stochastic Soft Real-time Parallel Applications

    Get PDF
    Advanced architecture processors provide features such as caches and branch prediction that result in improved, but variable, execution time of software. Hard real-time systems require tasks to complete within timing constraints. Consequently, hard real-time systems are typically designed conservatively through the use of tasks' worst-case execution times (WCET) in order to compute deterministic schedules that guarantee tasks' execution within given time constraints. This use of pessimistic execution time assumptions provides real-time guarantees at the cost of decreased performance and resource utilization. In soft real-time systems, however, meeting deadlines is not an absolute requirement (i.e., missing a few deadlines does not severely degrade system performance or cause catastrophic failure). In such systems, a guaranteed minimum probability of completing by the deadline is sufficient. Therefore, there is considerable latitude in such systems for improving resource utilization and performance as compared with hard real-time systems, through the use of more realistic execution time assumptions. Given probability distribution functions (PDFs) representing tasks' execution time requirements, and tasks' communication and precedence requirements, represented as a directed acyclic graph (DAG), this dissertation proposes and investigates algorithms for constructing non-preemptive stochastic schedules. New PDF manipulation operators developed in this dissertation are used to compute tasks' start and completion time PDFs during schedule construction. PDFs of the schedules' completion times are also computed and used to systematically trade the probability of meeting end-to-end deadlines for schedule length and jitter in task completion times. Because of the NP-hard nature of the non-preemptive DAG scheduling problem, the new stochastic scheduling algorithms extend traditional heuristic list scheduling and genetic list scheduling algorithms for DAGs by using PDFs instead of fixed time values for task execution requirements. The stochastic scheduling algorithms also account for delays caused by communication contention, typically ignored in prior DAG scheduling research. Extensive experimental results are used to demonstrate the efficacy of the new algorithms in constructing stochastic schedules. Results also show that through the use of the techniques developed in this dissertation, the probability of meeting deadlines can be usefully traded for performance and jitter in soft real-time systems.
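    A minimal sketch of the sort of PDF manipulation involved (illustrative histograms, not the dissertation's operators): task execution times represented as discrete distributions over fixed-width time slots, serial composition by convolution, and the probability of meeting an end-to-end deadline read from the resulting CDF.

```python
import numpy as np

SLOT = 1.0  # time-slot width (arbitrary units)

# P(exec time == i * SLOT) for two tasks executed in sequence (made-up histograms).
task_a = np.array([0.0, 0.1, 0.6, 0.3])        # mostly 2 slots, sometimes 1 or 3
task_b = np.array([0.0, 0.0, 0.5, 0.4, 0.1])   # 2 to 4 slots

completion = np.convolve(task_a, task_b)       # PDF of the a -> b chain's completion time
cdf = np.cumsum(completion)

deadline_slots = 6
p_meet = cdf[deadline_slots] if deadline_slots < len(cdf) else 1.0
print(f"P(chain finishes within {deadline_slots * SLOT} time units) = {p_meet:.3f}")
```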