Adaptive learning, endogenous inattention, and changes in monetary policy
This paper develops an adaptive learning formulation of an extension to the Ball, Mankiw, and Reis (2005) sticky information model that incorporates endogenous inattention. We show that, following an exogenous increase in the policymaker's preference for price stability over output stability, the learning process can converge to a new equilibrium in which both output and price volatility are lower.
Keywords: Monetary policy; Information theory
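As a toy illustration of the adaptive learning mechanism only (the paper's model is far richer), the sketch below runs constant-gain learning in a simple self-referential linear model and shows the belief converging to the rational-expectations fixed point; the model and all parameter values are assumptions, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical self-referential model: x_t = mu + alpha * belief_t + noise.
    # Agents update their belief about the mean of x with a constant gain.
    mu, alpha, gain = 1.0, 0.5, 0.05
    belief = 0.0
    for t in range(2000):
        x = mu + alpha * belief + rng.normal(scale=0.1)
        belief += gain * (x - belief)  # constant-gain adaptive learning step

    # The fixed point solves belief = mu + alpha * belief, i.e. mu / (1 - alpha).
    print(f"learned belief: {belief:.3f}, fixed point: {mu / (1 - alpha):.3f}")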
Energy-aware simulation of workflow execution in High Throughput Computing systems
Workflows offer great potential for enacting correlated jobs in an automated manner. This is especially desirable when workflows are large or there is a desire to run a workflow multiple times. Much research has been conducted on reducing the makespan of running workflows and maximising the utilisation of the resources they run on, with some existing research investigating how to reduce the energy consumption of workflows on dedicated resources. We extend the HTC-Sim simulation framework to support workflows, allowing us to evaluate the effect of different scheduling strategies on the overheads and energy consumption of workflows run on non-dedicated systems. We evaluate a number of scheduling strategies from the literature in an environment where (workflow) jobs can be evicted by higher-priority users.
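To give a flavour of the trace-driven reasoning involved (a hypothetical sketch, not the HTC-Sim API), the toy simulation below models a job that restarts from scratch whenever a higher-priority user evicts it, and tallies the energy spent, including work lost to evictions; the power draw and eviction rate are assumed values.

    import random

    random.seed(1)

    POWER_ACTIVE_W = 120.0  # assumed active power draw per node (hypothetical)

    def simulate_job(runtime_s, evictions_per_hour):
        """Run one job on a non-dedicated node: on eviction the job loses
        all progress and restarts. Returns (wall time s, energy J)."""
        wall = energy = 0.0
        while True:
            # Time until the next eviction, exponentially distributed.
            next_evict = random.expovariate(evictions_per_hour / 3600.0)
            if next_evict >= runtime_s:
                return wall + runtime_s, energy + POWER_ACTIVE_W * runtime_s
            wall += next_evict
            energy += POWER_ACTIVE_W * next_evict  # energy wasted on lost work

    wall, energy = simulate_job(runtime_s=4 * 3600, evictions_per_hour=0.2)
    print(f"wall time {wall / 3600:.1f} h, energy {energy / 3.6e6:.2f} kWh")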
GRIDCC: Real-time workflow system
The Grid is a concept which allows the sharing of resources between distributed communities, allowing each to progress towards potentially different goals. As adoption of the Grid increases, so do the activities that people wish to conduct through it. The GRIDCC project is a European Union funded project addressing the issues of integrating instruments into the Grid. This increases the need for workflows, and for Quality of Service guarantees upon those workflows, as many of these instruments have real-time requirements. In this paper we present the workflow management service within the GRIDCC project, which is tasked with optimising workflows and ensuring that they meet the pre-defined QoS requirements specified upon them.
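As a rough illustration of deadline-style QoS admission (a hypothetical sketch, not the GRIDCC workflow management service), the code below computes the critical-path length of an assumed instrument workflow and accepts or rejects it against a deadline.

    from functools import lru_cache

    # Hypothetical instrument workflow: task -> estimated duration (seconds).
    tasks = {"acquire": 5.0, "calibrate": 3.0, "analyse": 8.0, "store": 2.0}
    deps = {"calibrate": ["acquire"], "analyse": ["calibrate"], "store": ["analyse"]}

    @lru_cache(maxsize=None)
    def finish_time(task):
        # Earliest finish = latest dependency finish plus own duration.
        start = max((finish_time(d) for d in deps.get(task, [])), default=0.0)
        return start + tasks[task]

    critical_path = max(finish_time(t) for t in tasks)
    deadline = 20.0  # assumed real-time QoS requirement (seconds)
    print(f"critical path {critical_path:.1f}s:",
          "accept" if critical_path <= deadline else "reject")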
How much data do I need? A case study on medical data
The collection of data to train a Deep Learning network is costly in terms of effort and resources. In many cases, especially in a medical context, it may have detrimental impacts, such as requiring invasive medical procedures or processes which could in themselves cause medical harm. However, Deep Learning is seen as a data-hungry method. Here, we look at two commonly held adages: i) more data gives better results, and ii) transfer learning will aid you when you don't have enough data. These are widely assumed to be true and used as evidence for choosing how to solve a problem when Deep Learning is involved. We evaluate six medical datasets and six general datasets, training a ResNet18 network on varying subsets of these datasets to evaluate whether 'more data gives better results'. We take eleven of these datasets as the sources for Transfer Learning on subsets of the twelfth dataset -- Chest -- in order to determine whether Transfer Learning is universally beneficial. We go further to see whether multi-stage Transfer Learning provides a consistent benefit. Our analysis shows that the real situation is more complex than these simple adages: more data could lead to a case of diminishing returns, and an incorrect choice of dataset for transfer learning can lead to worse performance, with datasets which we would consider highly similar to the Chest dataset giving worse results than datasets which are more dissimilar. Multi-stage transfer learning likewise reveals complex relationships between datasets.
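A minimal sketch of the subset experiment, assuming a torchvision-style setup (CIFAR10 stands in for the paper's datasets; the actual subset sizes and training schedule differ):

    import torch
    from torch.utils.data import DataLoader, Subset
    from torchvision import datasets, models, transforms

    tfm = transforms.Compose([transforms.Resize(224), transforms.ToTensor()])
    full = datasets.CIFAR10("data", train=True, download=True, transform=tfm)

    for n in (500, 5000, 50000):               # growing training-set sizes
        loader = DataLoader(Subset(full, range(n)), batch_size=64, shuffle=True)
        net = models.resnet18(num_classes=10)   # fresh network per subset size
        opt = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
        loss_fn = torch.nn.CrossEntropyLoss()
        net.train()
        for x, y in loader:                     # one epoch shown for brevity
            opt.zero_grad()
            loss_fn(net(x), y).backward()
            opt.step()
        # ...evaluate on a held-out set to trace accuracy against n...

Swapping in weights pre-trained on a source dataset and replacing the classifier head would give the transfer-learning arm of such a study.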
Stochastic Workflow Scheduling with QoS Guarantees in Grid Computing Environments
Grid computing infrastructures embody a cost-effective computing paradigm that virtualises heterogeneous system resources to meet the dynamic needs of critical business and scientific applications. These applications range from batch processes and long-running tasks to more real-time and even transactional applications. Grid schedulers aim to make efficient use of Grid resources in a cost-effective way, while satisfying the Quality-of-Service requirements of the applications. Scheduling in such a large-scale, dynamic and distributed environment is a complex undertaking. In this paper, we propose an approach to Grid scheduling which abstracts over the details of individual applications and aims to provide a globally optimal schedule, while having the ability to dynamically adjust to varying workloads.
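As a hypothetical illustration of scheduling under runtime uncertainty (not the approach proposed in the paper), the sketch below uses Monte Carlo sampling to estimate the probability that each resource class meets a job's deadline, then picks the most promising one; the exponential runtime model and all figures are assumptions.

    import random

    random.seed(7)

    def p_meet_deadline(mean_runtime_s, deadline_s, trials=10_000):
        # Assume exponentially distributed runtimes for the sketch.
        hits = sum(random.expovariate(1.0 / mean_runtime_s) <= deadline_s
                   for _ in range(trials))
        return hits / trials

    resources = {"batch": 40.0, "standard": 25.0, "premium": 12.0}  # mean s
    deadline = 30.0
    probs = {r: p_meet_deadline(m, deadline) for r, m in resources.items()}
    best = max(probs, key=probs.get)
    print(best, {r: round(p, 2) for r, p in probs.items()})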
Energy-efficient checkpointing in high-throughput cycle-stealing distributed systems
Checkpointing is a fault-tolerance mechanism commonly used in High Throughput Computing (HTC) environments to allow the execution of long-running computational tasks on compute resources subject to hardware or software failures, as well as interruptions from resource owners and more important tasks. Until recently many researchers have focused on the performance gains achieved through checkpointing, but now, with growing scrutiny of the energy consumption of IT infrastructures, it is increasingly important to understand the energy impact of checkpointing within an HTC environment. In this paper we demonstrate through trace-driven simulation of real-world datasets that existing checkpointing strategies are inadequate at maintaining an acceptable level of energy consumption whilst maintaining the performance gains expected with checkpointing. Furthermore, we identify factors important in deciding whether to exploit checkpointing within an HTC environment, and propose novel strategies to curtail the energy consumption of checkpointing approaches whilst maintaining the performance benefits.
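For intuition only (not one of the strategies proposed in the paper), the sketch below compares the first-order time and energy overhead of candidate checkpoint intervals against the classic Young approximation T_opt = sqrt(2 * C * MTBF); the checkpoint cost and failure rate are assumed values.

    import math

    C = 60.0        # checkpoint cost in seconds (assumed)
    MTBF = 3600.0   # mean time between failures/evictions in seconds (assumed)

    def overhead_fraction(interval_s):
        # Expected fraction of time lost to checkpoints plus rework,
        # a first-order approximation valid for interval << MTBF.
        return C / interval_s + interval_s / (2 * MTBF)

    t_young = math.sqrt(2 * C * MTBF)
    for t in (300.0, t_young, 1800.0):
        # With constant power draw, energy overhead tracks time overhead.
        print(f"interval {t:6.0f}s -> ~{100 * overhead_fraction(t):.1f}% overhead")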