
    Benefits in Relaxing the Power Capping Constraint

    Work supported by the EU FETHPC project ANTAREX (g.a. 671623), EU project ExaNoDe (g.a. 671578), and EU ERC Project MULTI-THERMAN (g.a. 291125). In this manuscript we evaluate the impact of HW power-capping mechanisms on a real scientific application based on parallel execution. By comparing the HW capping mechanism against static frequency-allocation schemes, we show that a speedup can be achieved if the power constraint is enforced on average over the application run instead of over short time periods. RAPL, which enforces the power constraint on a time scale of a few milliseconds, fails to share the power budget between more demanding and less demanding application phases.
    Cesarini, Daniele; Bartolini, Andrea; Benini, Luca
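
    As a rough illustration of the difference the abstract describes between an instantaneous cap (RAPL-style, enforced every few milliseconds) and a cap enforced only on the run-average power, the sketch below simulates a two-phase run; the phase durations, power levels, and the linear throttling model are illustrative assumptions, not figures from the paper.

        # Illustrative sketch (assumed numbers): an instantaneous power cap clips
        # every phase on its own, while an average cap lets low-power phases lend
        # their unused budget to high-power phases.
        phases = [(10.0, 80.0), (10.0, 140.0)]   # (seconds, watts): e.g. communication, compute
        budget_w = 110.0                          # per-socket power budget

        def throttled_time(duration, demand, granted):
            # Crude model: performance scales linearly with the power actually granted.
            return duration * max(1.0, demand / granted)

        # Instantaneous cap: each phase is limited to the budget independently.
        t_instant = sum(throttled_time(d, p, min(p, budget_w)) for d, p in phases)

        # Average cap: only the run-average power must stay within the budget.
        nominal_time = sum(d for d, _ in phases)
        avg_power = sum(d * p for d, p in phases) / nominal_time
        t_average = nominal_time * max(1.0, avg_power / budget_w)

        print(f"instantaneous cap: {t_instant:.1f} s, average cap: {t_average:.1f} s")

    With these assumed numbers the average cap finishes the run sooner because the communication phase leaves headroom that the compute phase can spend, which is the effect the paper attributes to enforcing the constraint on average.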

    Adaptive Knobs for Resource Efficient Computing

    Performance demands of emerging domains such as artificial intelligence, machine learning, computer vision, and the Internet of Things continue to grow. Meeting such requirements on modern multi-/many-core systems with higher power densities, fixed power and energy budgets, and thermal constraints exacerbates the run-time management challenge. This leaves an open problem of extracting the required performance within the power and energy limits while also ensuring thermal safety. Existing architectural solutions, including asymmetric and heterogeneous cores and custom acceleration, improve performance-per-watt in specific design-time and static scenarios. However, satisfying applications' performance requirements under dynamic and unknown workload scenarios, subject to varying system dynamics of power, temperature and energy, requires intelligent run-time management. Adaptive strategies are necessary for maximizing resource efficiency, considering i) diverse requirements and characteristics of concurrent applications, ii) dynamic workload variation, iii) core-level heterogeneity and iv) power, thermal and energy constraints. This dissertation proposes such adaptive techniques for efficient run-time resource management to maximize performance within fixed budgets under unknown and dynamic workload scenarios. The resource management strategies proposed in this dissertation comprehensively consider application and workload characteristics and the variable effect of power actuation on performance to make proactive and appropriate allocation decisions. Specific contributions include i) a run-time mapping approach to improve power budgets for higher throughput, ii) thermal-aware performance boosting for efficient utilization of the power budget and higher performance, iii) approximation as a run-time knob exploiting accuracy-performance trade-offs for maximizing performance under power caps at minimal loss of accuracy, and iv) coordinated approximation for heterogeneous systems through joint actuation of dynamic approximation and power knobs for performance guarantees with minimal power consumption. The approaches presented in this dissertation focus on adapting existing mapping techniques, performance-boosting strategies, and software and dynamic approximations to meet the performance requirements while simultaneously considering system constraints. The proposed strategies are compared against relevant state-of-the-art run-time management frameworks to qualitatively evaluate their efficacy.
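
    The dissertation's use of approximation as a run-time knob under a power cap can be pictured as a simple feedback loop; the sketch below is only an assumed illustration (the stubbed power/throughput models, the 100 ms control epoch, and set_approximation_level are made up for this example), not the dissertation's actual framework.

        # Hedged sketch: raise the approximation level only while the power cap or
        # the performance target is violated; recover accuracy when there is slack.
        import random, time

        POWER_CAP_W = 95.0
        PERF_TARGET = 100.0          # e.g. iterations per second

        # Stubbed platform hooks: a real runtime would read RAPL counters and an
        # application heartbeat, and reconfigure the approximate kernel.
        def read_power_w(level):       return 110.0 - 8.0 * level + random.uniform(-2, 2)
        def read_throughput(level):    return 85.0 + 7.0 * level + random.uniform(-3, 3)
        def set_approximation_level(level):  pass

        level, MAX_LEVEL = 0, 4       # 0 = exact, higher = more aggressive approximation

        for epoch in range(50):       # assumed 100 ms control epochs
            power, perf = read_power_w(level), read_throughput(level)
            if power > POWER_CAP_W or perf < PERF_TARGET:
                level = min(MAX_LEVEL, level + 1)    # trade accuracy for headroom
            elif power < 0.9 * POWER_CAP_W and perf > 1.1 * PERF_TARGET:
                level = max(0, level - 1)            # recover accuracy under slack
            set_approximation_level(level)
            time.sleep(0.1)

    The loop increases the approximation level only while a constraint is violated, which mirrors the stated goal of meeting power caps at minimal loss of accuracy.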

    How should public infrastructure be financed?

    Infrastructure (Economics); Finance; Public policy

    "Managed Care, Physician Incentives, and Norms of Medical Practice: Racing to the Bottom or Pulling to the Top?"

    The incentive contracts that managed care organizations write with physicians have generated considerable controversy. Critics fear that if informational asymmetries inhibit patients from directly assessing the quality of care provided by their physician, competition will lead to a "race to the bottom" in which managed care plans induce physicians to offer only minimal levels of care. To analyze this issue we propose a model of competition between managed care organizations. The model covers both physician incentive contracts and HMO product-market strategies in an environment of extreme information asymmetry: physicians perceive quality of care perfectly, and patients do not perceive it at all. We find that even in this stark setting, managed care organizations need not race to the bottom. Rather, the combination of product differentiation and physician practice norms causes managed care organizations to race to differing market niches, with some providing high levels of care as a means of assembling large physician networks. We also find that relative physician practice norms, defined endogenously by the standards of medical care prevailing in a market, exert a "pull to the top" that raises the quality of care provided by all managed care organizations in the market. We conclude by considering the implications of our model for public policies designed to limit the influence of HMO incentive systems.

    A Dynamic Incentive Mechanism for Transmission Expansion in Electricity Networks: Theory, Modeling, and Application

    We propose a price-cap mechanism for electricity-transmission expansion based on redefining transmission output in terms of financial transmission rights. Our mechanism applies the incentive-regulation logic of rebalancing a two-part tariff. First, we test this mechanism in a three-node network. We show that the mechanism intertemporally promotes an investment pattern that relieves congestion, increases welfare, augments the Transco's profits, and induces convergence of prices to marginal costs. We then apply the mechanism to a grid of northwestern Europe and show a gradual convergence toward a common-price benchmark, an increase in total capacity, and convergence toward the welfare optimum.
    Keywords: electricity transmission expansion, incentive regulation
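
    One common way to formalize the rebalanced two-part tariff the abstract refers to (notation assumed here for illustration; it is not copied from the paper) is a cap on the Laspeyres-weighted sum of the variable and fixed parts:

        \[
          \tau^{t}\cdot q^{t-1} \;+\; F^{t} N \;\le\; (1 + RPI - X)\,\bigl(\tau^{t-1}\cdot q^{t-1} \;+\; F^{t-1} N\bigr),
        \]

    where $\tau^{t}$ are the variable charges on the financial transmission rights $q$, $F^{t}$ is the fixed fee paid by each of $N$ network users, and $RPI - X$ is the allowed price adjustment. Under such a cap the Transco can recover falling congestion rents $\tau\cdot q$ through the fixed fee as expansion relieves congestion, which is the intertemporal incentive the abstract describes.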

    Future rent-seeking and current public savings

    October 9, 200

    Government Gains from Self-Restraint: A Bargaining Theory of Inefficient Redistribution

    We present a bargaining model of the interaction between a government and interest groups in which, unlike most existing models, neither side is assumed to have all the bargaining power. The government finds it optimal to constrain itself in the use of transfer policies to improve its bargaining position. In a model of redistribution to lobbies, the government finds it optimal to cap the size of lump-sum transfers it makes below the unconstrained equilibrium level. With a binding cap on efficient subsidies in place, less efficient subsidies will be used for redistribution even when they serve no economic function. Analogously, if it must choose either efficient or inefficient transfers, it may find it optimal to forego use of the former if its bargaining power relative to the lobby is sufficiently low. Even if the lobby can bargain over the type of redistribution policy with the government, the inefficient policy may still be used in equilibrium. If policymakers are elected, rational fully informed voters may choose a candidate who implements the inefficient policy over one who would implement the efficient policy, and may prefer the candidate with the lower weight on voter welfare. We thus offer an alternative theory that explains why governments may optimally choose to restrict efficient lump-sum transfers to interest groups and replace them with relatively less efficient transfers.

    The safety valve and climate policy

    Abstract in HTML and technical report in PDF available on the Massachusetts Institute of Technology Joint Program on the Science and Policy of Global Change website (http://mit.edu/globalchange/www/). Includes bibliographical references (p. 11). In discussions of a cap-and-trade system for implementation of Kyoto Protocol-type quantity targets, a "safety valve" was proposed whereby, through government sales of emissions permits at a fixed price, the marginal cost of the effort could be limited to a predetermined level. The advantages seen for such a hybrid system included the shifting of the Kyoto architecture toward a price-based system and the blunting of opposition to the Protocol based on anticipated high cost. This paper reviews the theoretical underpinnings of the preference for a price instrument for controlling stock pollutants like greenhouse gases, and summarizes the arguments supporting and opposing the safety-valve idea within the policy debate. If, in the face of uncertainty, emissions are to be limited to a fixed quantity target, then some means needs to be provided to avoid complete inflexibility. A safety valve can serve this function, although similar advantages can be achieved by phasing in quantity targets, coupled with provision for banking and borrowing.
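
    The theoretical preference for a price instrument for stock pollutants that the paper reviews is usually traced to the Weitzman prices-versus-quantities comparison; a standard statement of that result (in generic notation, not reproduced from the paper) is

        \[
          \Delta \;\approx\; \frac{\sigma^{2}\,\bigl(C'' - B''\bigr)}{2\,(C'')^{2}},
        \]

    where $\Delta$ is the expected welfare advantage of the price instrument, $\sigma^{2}$ is the variance of the abatement-cost shock, $C''$ is the slope of marginal abatement cost, and $B''$ is the (absolute) slope of marginal benefit. For a stock pollutant such as greenhouse gases, marginal benefits are nearly flat over a single period's emissions, so $B'' \approx 0$, $\Delta > 0$, and a price instrument, or a cap softened by a safety valve, dominates a rigid quantity target.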

    Power and Thermal Management Runtimes for HPC Applications in the Era of Exascale Computing

    In technical and scientific computing, the rush towards larger simulations has so far been assisted by a steady downsizing of micro-processing units, which has made it possible to increase the compute capacity of general-purpose architectures at constant power. As a side effect of the end of Dennard scaling, this process is now hitting its ultimate power limits and is about to come to an end. The continuous growth of power consumption in supercomputers requires a well-defined power budget at design time, which should consider the worst-case power consumption to avoid outages. But supercomputers rarely reach their worst-case power consumption during their lifetime, which limits the performance achievable under normal conditions. Another drawback of the end of Dennard scaling is that power density increases at every technology step, leading to overheating and thermal gradients. As a result, thermal-bound machines show performance degradation and heterogeneity, which limit the peak performance of the system. Moreover, it is well known that in large application runs the time spent by the application in communication is not negligible and impacts the power consumption of the system. This thesis presents software strategies to tackle the main bottlenecks induced by the power and thermal issues that affect next-generation supercomputers. The thesis targets scientific applications, which are the principal candidates “suffering” from the power and thermal constraints of supercomputers. To respond to the above challenges, this work shows that propagating workload requirements from the application to the runtime and operating-system levels is the key to providing efficiency. This is possible only if the proposed software methodologies cause little or no overhead in terms of application performance. The experimental results show a significant step forward with respect to current state-of-the-art solutions for power and thermal control of HPC systems.
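
    A minimal sketch of the kind of runtime the thesis argues for, assuming the application (or an MPI wrapper library) can announce its communication phases so the runtime can lower the core's frequency while it is only waiting; the sysfs interface and frequency values below are assumptions for illustration, not the thesis's actual implementation.

        # Minimal sketch (assumed interface): lower the core frequency while the
        # application is blocked in communication, restore it for compute phases.
        import contextlib

        HIGH_FREQ_KHZ = 2_400_000
        LOW_FREQ_KHZ = 1_200_000

        def set_core_freq_khz(core, khz):
            # Assumes a Linux cpufreq governor that accepts explicit setpoints and
            # sufficient privileges; a production runtime would use a safer interface.
            path = f"/sys/devices/system/cpu/cpu{core}/cpufreq/scaling_setspeed"
            with open(path, "w") as f:
                f.write(str(khz))

        @contextlib.contextmanager
        def communication_phase(core):
            """Wrap a blocking communication call announced by the application."""
            set_core_freq_khz(core, LOW_FREQ_KHZ)        # the core is only waiting
            try:
                yield
            finally:
                set_core_freq_khz(core, HIGH_FREQ_KHZ)   # back to full speed to compute

        # Hypothetical usage: the application marks its waits, e.g.
        #   with communication_phase(core=3):
        #       comm.Barrier()

    Because the slow-down is applied only where the application itself declares it is communicating, the control adds essentially no overhead to the compute phases, which is the little-or-no-overhead requirement stated above.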

    Efficient Emission Fees in the U.S. Electricity Sector

    This paper provides new estimates of efficient emission fees for sulfur dioxide (SO2) and nitrogen oxides (NOx) emissions in the U.S. electricity sector. The estimates are obtained by coupling a detailed simulation model of the U.S. electricity markets with an integrated assessment model that links changes in emissions with atmospheric transport, environmental endpoints, and valuation of impacts. Efficient fees are found by comparing incremental benefits with emission levels. National quantity caps that are equivalent to these fees are also computed and found to approximate the caps under consideration in the current multi-pollutant debate in the U.S. Congress and the recent proposals from the Bush administration for the electricity industry. We also explore whether regional differentiation of caps on different pollutants is likely to enhance efficiency.
    Keywords: emissions trading, emission fees, air pollution, cost-benefit analysis, electricity, particulates, nitrogen oxides, NOx, sulfur dioxide, SO2, health benefits
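
    The efficiency criterion behind such fee estimates is the standard one, stated here in generic notation rather than the paper's: the fee is set where incremental benefits and incremental abatement costs meet,

        \[
          t^{*} \;=\; MD(e^{*}), \qquad \text{with } MAC(e^{*}) \;=\; MD(e^{*}),
        \]

    where $MD$ is marginal damage, $MAC$ is marginal abatement cost, and $e^{*}$ is the resulting emission level; an equivalent national quantity cap, as mentioned above, is then simply the emissions $e^{*}$ that the efficient fee would induce.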