Comparing Cost Functions in Resource Analysis
Cost functions provide information about the amount of resources required to execute a program in terms of the sizes of input arguments. They can provide an upper bound, a lower bound, or the average-case cost. Motivated by the existence of a number of automatic cost analyzers which produce cost functions, we propose an approach for automatically proving that a cost function is smaller than another one. In all applications of resource analysis, such as resource-usage verification, program synthesis and optimization, etc., it is essential to compare cost functions. This allows choosing an implementation with smaller cost or guaranteeing that the given resource-usage bounds are preserved. Unfortunately, automatically generated cost functions for realistic programs tend to be rather intricate: they are defined by multiple cases, involve non-linear subexpressions (e.g., exponential, polynomial and logarithmic), and can contain multiple variables, possibly related by means of constraints. Thus, comparing cost functions is far from trivial. Our approach first syntactically transforms functions into simpler forms and then applies a number of sufficient conditions which guarantee that a set of expressions is smaller than another expression. Our preliminary implementation in the COSTA system indicates that the approach can be useful in practice.
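The idea of a sufficient (sound but incomplete) condition for comparing cost functions can be illustrated with a minimal sketch. The representation and the coefficient-wise test below are assumptions made for illustration; they are not the COSTA system's actual transformation or conditions.

```python
# Hypothetical sketch: a *sufficient* (not complete) check that one cost
# function is bounded by another. Cost functions are modeled as univariate
# polynomials over a size variable n >= 1, given as {exponent: coefficient}
# dicts. This is an illustration only, not COSTA's actual algorithm.

def poly_leq(f, g):
    """Sufficient condition for f(n) <= g(n) for all n >= 1.

    Checks that g - f has nonnegative coefficients; since n >= 1,
    every monomial of the difference is then nonnegative. The check
    may return False even when f <= g actually holds (incomplete).
    """
    exps = set(f) | set(g)
    diff = {e: g.get(e, 0.0) - f.get(e, 0.0) for e in exps}
    return all(c >= 0 for c in diff.values())

# 3n^2 + 2n  vs  5n^2 + 2n + 1: dominated coefficient-wise.
print(poly_leq({2: 3, 1: 2}, {2: 5, 1: 2, 0: 1}))  # True
# 4n^2  vs  n^3: holds for n >= 4, but the simple test cannot prove it.
print(poly_leq({2: 4}, {3: 1}))  # False
```

The second call shows why such conditions are only sufficient: a syntactic transformation step (as the abstract describes) is needed before simple coefficient tests become effective.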
Towards Energy Consumption Verification via Static Analysis
In this paper we leverage an existing general framework for resource usage
verification and specialize it for verifying energy consumption specifications
of embedded programs. Such specifications can include both lower and upper
bounds on energy usage, and they can express intervals within which energy
usage is to be certified to be within such bounds. The bounds of the intervals
can be given in general as functions on input data sizes. Our verification
system can prove whether such energy usage specifications are met or not. It
can also infer the particular conditions under which the specifications hold.
To this end, these conditions are also expressed as intervals of functions of
input data sizes, such that a given specification can be proved for some
intervals but disproved for others. The specifications themselves can also
include preconditions expressing intervals for input data sizes. We report on a
prototype implementation of our approach within the CiaoPP system for the XC
language and XS1-L architecture, and illustrate with an example how embedded
software developers can use this tool, and in particular for determining values
for program parameters that ensure meeting a given energy budget while
minimizing the loss in quality of service. Comment: Presented at HIP3ES, 2015 (arXiv:1501.03064).
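The interval-based verification described above can be sketched numerically. The energy function and specification bounds below are invented for illustration; a real verifier such as the one built on CiaoPP reasons over functions of data sizes symbolically rather than by sampling.

```python
# Illustrative sketch (assumed functions, not the CiaoPP implementation):
# given an inferred energy-usage function and lower/upper specification
# bounds, all as functions of input data size n, find the intervals of
# sizes for which the specification holds.

def check_spec(energy, lower, upper, sizes):
    """Map each input size to True iff lower(n) <= energy(n) <= upper(n)."""
    return {n: lower(n) <= energy(n) <= upper(n) for n in sizes}

def holds_intervals(verdicts):
    """Collapse per-size verdicts into maximal intervals where the spec holds."""
    intervals, start, prev = [], None, None
    for n in sorted(verdicts):
        if verdicts[n] and start is None:
            start = n
        elif not verdicts[n] and start is not None:
            intervals.append((start, prev))
            start = None
        prev = n
    if start is not None:
        intervals.append((start, prev))
    return intervals

energy = lambda n: 3.0 * n + 10.0   # inferred energy usage (made up)
lower  = lambda n: 2.0 * n          # spec lower bound (made up)
upper  = lambda n: 50.0 + n         # spec upper bound (made up)

v = check_spec(energy, lower, upper, range(1, 41))
print(holds_intervals(v))  # [(1, 20)]: 3n+10 <= n+50 exactly when n <= 20
```

This mirrors the abstract's point that a specification can be proved for some intervals of data sizes and disproved for others.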
Highly optimized tolerance and power laws in dense and sparse resource regimes
Power law cumulative frequency vs. event size distributions
are frequently cited as evidence for complexity and
serve as a starting point for linking theoretical models and mechanisms with
observed data. Systems exhibiting this behavior present fundamental
mathematical challenges in probability and statistics. The broad span of length
and time scales associated with heavy tailed processes often require special
sensitivity to distinctions between discrete and continuous phenomena. A
discrete Highly Optimized Tolerance (HOT) model, referred to as the
Probability, Loss, Resource (PLR) model, gives the exponent as a
function of the dimension of the underlying substrate in the sparse
resource regime. This agrees well with data for wildfires, web file sizes, and
electric power outages. However, another HOT model, based on a continuous
(dense) distribution of resources, predicts a different exponent. In this
paper we describe and analyze a third model, the cuts model, which exhibits
both behaviors, but in different regimes. We use the cuts model to show that
all three models agree in the dense resource limit. In the sparse resource
regime, the continuum model breaks down, but in this case the cuts and PLR
models are described by the same exponent. Comment: 19 pages, 13 figures.
Financial analysis of a partial manufacturing plant consolidation
Includes bibliographical references
Future capacity growth of energy technologies: are scenarios consistent with historical evidence?
Future scenarios of the energy system under greenhouse gas emission constraints depict dramatic growth in a range of energy technologies. Technological growth dynamics observed historically provide a useful comparator for these future trajectories. We find that historical time series data reveal a consistent relationship between how much a technology’s cumulative installed capacity grows, and how long this growth takes. This relationship between extent (how much) and duration (for how long) is consistent across both energy supply and end-use technologies, and both established and emerging technologies. We then develop and test an approach for using this historical relationship to assess technological trajectories in future scenarios. Our approach for “learning from the past” contributes to the assessment and verification of integrated assessment and energy-economic models used to generate quantitative scenarios. Using data on power generation technologies from two such models, we also find a consistent extent-duration relationship across both technologies and scenarios. This relationship describes future low carbon technological growth in the power sector which appears to be conservative relative to what has been evidenced historically. Specifically, future extents of capacity growth are comparatively low given the lengthy time duration of that growth. We treat this finding with caution due to the low number of data points. Yet it remains counter-intuitive given the extremely rapid growth rates of certain low carbon technologies under stringent emission constraints. We explore possible reasons for the apparent scenario conservatism, and find parametric or structural conservatism in the underlying models to be one possible explanation.
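One common way to operationalize an extent-duration relationship of this kind is an ordinary least-squares fit on a log-log scale. The sketch below uses invented data points, not the paper's historical series or model outputs, and is only meant to show the mechanics of such a comparison.

```python
import math

# Illustrative sketch with invented data (not the paper's dataset):
# fit extent of capacity growth against its duration on a log-log
# scale, then compare a scenario's extent to the fitted prediction.
growth = [  # (duration in years, extent: multiple of initial capacity)
    (10, 4.0), (20, 15.0), (30, 60.0), (40, 250.0),
]

xs = [math.log(d) for d, _ in growth]
ys = [math.log(e) for _, e in growth]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

# A scenario whose extent falls well below the fitted line for its
# duration would look "conservative" relative to the historical fit.
def predicted_extent(duration):
    return math.exp(intercept + slope * math.log(duration))

print(round(slope, 2), round(predicted_extent(25), 1))
```

A scenario point lying below `predicted_extent(duration)` corresponds to the "comparatively low extent for a lengthy duration" pattern the abstract describes.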