889 research outputs found

    Efficient in-situ delay monitoring for chip health tracking

    Enforcing Environmental Regulation: Implications of Remote Sensing Technology

    We review economic models of environmental protection and regulatory enforcement to highlight several attributes that are particularly likely to benefit from new enforcement technologies such as remote sensing using satellites in space. These attributes include the quantity and quality of information supplied by the new technologies; the accessibility of the information to regulators, regulatees, and third parties; the cost of the information; and whether the process of information collection can be concealed from the observer. Satellite remote sensing is likely to influence all of these attributes and in general, improve the efficacy of enforcement.

    Managerial Hedging and Portfolio Monitoring

    Incentive compensation induces correlation between the portfolio of managers and the cash flow of the firms they manage. This correlation exposes managers to risk and hence gives them an incentive to hedge against the poor performance of their firms. We study the agency problem between shareholders and a manager when the manager can hedge his incentive compensation using financial markets and shareholders cannot perfectly monitor the manager’s portfolio in order to keep him from hedging the risk in his compensation. In particular, shareholders can monitor the manager’s portfolio stochastically, and since monitoring is costly, governance is imperfect. If managerial hedging is detected, shareholders can seize the payoffs of the manager’s trades. We show that under the optimal contract: (i) the manager’s portfolio is monitored only when the firm performs poorly, (ii) the more costly monitoring is, the more sensitive the manager’s compensation is to firm performance, and (iii) conditional on the firm’s performance, the manager’s compensation is lower when his portfolio is monitored, even if no hedging is revealed by monitoring.

    Keywords: executive compensation, incentives, monitoring, corporate governance.
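
    A toy numerical illustration of the performance-contingent monitoring policy summarized above (audit only after poor performance, seize any detected hedging payoff) is sketched below in Python. The payoff numbers, threshold, and monitoring probability are hypothetical; the paper derives the optimal contract analytically rather than by simulation.

        # Toy illustration of performance-contingent, stochastic monitoring:
        # the portfolio is audited only when firm cash flow is low, and any
        # detected hedging payoff is seized. All numbers are hypothetical.
        import random

        def manager_payout(cash_flow, hedge_payoff, low_threshold=0.0, seed=None):
            rng = random.Random(seed)
            # Monitoring is costly, so it is triggered stochastically and only
            # when performance is poor.
            monitored = cash_flow < low_threshold and rng.random() < 0.5
            seized = hedge_payoff if monitored else 0.0
            base_pay = 1.0 + 0.5 * cash_flow   # pay-for-performance component
            return base_pay + hedge_payoff - seized, monitored

        print(manager_payout(-1.0, 0.3, seed=1))  # low cash flow: audit may trigger
        print(manager_payout(2.0, 0.3, seed=1))   # good performance: never audited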

    Design and evaluation of automatic workflow scaling algorithms for multi-tenant SaaS

    Current Cloud software development efforts to build novel Software-as-a-Service (SaaS) applications, just like traditional software development, usually no longer start from scratch. Instead, more and more Cloud developers opt to use multiple existing components and integrate them into their application workflow. Scaling the resulting application up or down according to user/tenant load, in order to meet the SLA, is no longer a matter of scaling resources for a single service; rather, it is a complex problem of scaling all individual service endpoints in the workflow according to their monitored runtime behavior. In this paper, we propose algorithms for automatic, runtime scaling of such multi-tenant SaaS workflows and evaluate them in CloudSim. Our results on time-varying workloads show that the proposed algorithms are effective and produce the best cost-quality trade-off while keeping Service Level Agreements (SLAs) in line. Empirically, the proactive algorithm with careful parameter tuning always meets the SLAs while suffering only a marginal increase in average cost per service component, approximately 5-8%, over our baseline passive algorithm, which, although it provides the lowest cost, suffers from prolonged violation of service component SLAs.
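
    As a rough illustration of the passive-versus-proactive distinction discussed above, the Python sketch below contrasts a policy that scales a service component only after its SLA is already violated with a policy that scales on a naive latency forecast. The function names, thresholds, and extrapolation rule are assumptions for illustration and are not taken from the paper's algorithms.

        # Illustrative sketch of passive (reactive) vs. proactive scaling for
        # one workflow component. Thresholds and the forecast rule are made up.

        def reactive_scale(instances, observed_latency_ms, sla_latency_ms):
            """Scale only after the SLA is already violated."""
            if observed_latency_ms > sla_latency_ms:
                return instances + 1
            if observed_latency_ms < 0.5 * sla_latency_ms and instances > 1:
                return instances - 1
            return instances

        def proactive_scale(instances, recent_latencies_ms, sla_latency_ms, margin=0.8):
            """Scale ahead of time from a simple trend forecast of latency."""
            # Naive linear extrapolation of the last two samples.
            forecast = 2 * recent_latencies_ms[-1] - recent_latencies_ms[-2]
            if forecast > margin * sla_latency_ms:
                return instances + 1
            if forecast < 0.4 * sla_latency_ms and instances > 1:
                return instances - 1
            return instances

        print(reactive_scale(2, 120, 100))        # 3: SLA already broken
        print(proactive_scale(2, [70, 85], 100))  # 3: forecast 100 ms exceeds 80 ms margin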

    SATTA: a Self-Adaptive Temperature-based TDF awareness methodology for dynamically reconfigurable FPGAs

    Dependability issues due to non-functional properties are emerging as a major cause of faults in modern digital systems, and effective countermeasures are needed to properly manage their critical timing effects. This paper presents a methodology to avoid transition delay faults (TDFs) in FPGA-based systems with low area overhead. The approach exploits temperature information and aging characteristics to minimize the cost in terms of performance degradation and power consumption. The architecture of a hardware manager able to avoid delay faults is presented and analyzed in depth, as well as its integration into the standard implementation design flow.
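
    The behavioral sketch below illustrates, in Python, the general idea of a temperature- and aging-aware manager that throttles (or would trigger reconfiguration) before a transition delay fault can occur. The derating model and all constants are invented for illustration and do not reproduce the SATTA hardware manager.

        # Behavioral sketch of a temperature/aging-aware delay-fault guard for a
        # reconfigurable region. The derating model and thresholds are hypothetical.

        def max_safe_frequency_mhz(nominal_mhz, temp_c, aged_hours):
            # Toy derating: delay grows with temperature and accumulated aging.
            derate = 1.0 + 0.002 * max(0, temp_c - 25) + 1e-5 * aged_hours
            return nominal_mhz / derate

        def manage(clock_mhz, nominal_mhz, temp_c, aged_hours):
            """Slow the clock (or trigger reconfiguration) before a transition
            delay fault can occur, instead of reacting after a timing error."""
            safe = max_safe_frequency_mhz(nominal_mhz, temp_c, aged_hours)
            return min(clock_mhz, safe)

        print(round(manage(200, 200, 85, 10_000), 1))  # throttled below 200 MHz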

    Integrated Circuit Design for Radiation Sensing and Hardening.

    Since the 1950s, integrated circuits have been widely used in the electronic devices that surround people’s lives. In addition to computing electronics, scientific and medical equipment have also undergone a metamorphosis, especially in radiation-related fields: compact, precision radiation detection systems for nuclear power plants and positron emission tomography (PET), as well as radiation-hardened-by-design (RHBD) circuits for space applications, are fabricated in advanced manufacturing technologies and exposed to a non-negligible probability of soft errors from radiation impact events. Integrated circuit design for radiation measurement equipment not only brings numerous advantages in size and power consumption, but also raises many challenges in speed and noise when replacing conventional design modalities. This thesis presents solutions for front-end receiver designs for radiation sensors, as well as an error detection and correction (EDAC) method for microprocessor designs operating under soft errors. The first preamplifier design is a novel technique that enhances the bandwidth and suppresses the input current noise by using two inductors; with this dual-inductor TIA signal processing configuration, one can reduce the fabrication cost, the area overhead, and the power consumption in a fast readout package. The second front-end receiver is a novel detector capacitance compensation technique based on the Miller effect; the fabricated CSA exhibits minimal variation in pulse shape as the detector capacitance is increased. Lastly, a modified D flip-flop called Razor-Lite is discussed, which uses charge sharing at internal nodes to provide a compact EDAC design for modern well-balanced processors and RHBD protection against soft errors caused by single-event effects (SEE).

    PhD, Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/111548/1/iykwon_1.pd
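
    The Python sketch below is a purely behavioral analogue of Razor-style timing-error detection (a main sample compared against a delayed shadow sample). It only illustrates the detect-and-correct idea and is not the charge-sharing Razor-Lite circuit described in the thesis; all timing values are hypothetical.

        # Behavioral analogue of Razor-style timing-error detection: the main
        # flip-flop samples at the clock edge, a shadow sample is taken after a
        # small delay, and a mismatch flags a late-arriving data value.

        def razor_sample(data_arrival_ps, clock_edge_ps, shadow_delay_ps, old_bit, new_bit):
            main = new_bit if data_arrival_ps <= clock_edge_ps else old_bit
            shadow = new_bit if data_arrival_ps <= clock_edge_ps + shadow_delay_ps else old_bit
            error = main != shadow
            corrected = shadow if error else main   # recover using the shadow value
            return corrected, error

        print(razor_sample(950, 1000, 100, 0, 1))   # (1, False): data arrived in time
        print(razor_sample(1050, 1000, 100, 0, 1))  # (1, True): late data caught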

    Trusted UAV Network Coverage using Blockchain, Machine Learning and Auction Mechanisms

    UAVs are emerging as one of the most promising technologies for providing rapid network coverage at affordable cost. The aim of this paper is to outsource network coverage of a specific area according to a desired quality-of-service requirement and to give the various entities in the network the intelligence to make autonomous decisions using blockchain and auction mechanisms. Considering a multi-UAV network in which each UAV is associated with its own controlling operator, the paper addresses two major challenges: the selection of the UAV for the desired quality of network coverage, and the development of a distributed and autonomous real-time monitoring framework for the enforcement of the service level agreement (SLA). For UAV selection, we employ a reputation-based auction mechanism to model the interaction between the business agent interested in outsourcing the network coverage and the UAV operators serving nearby areas. Theoretical analysis shows that the proposed auction mechanism attains a dominant-strategy equilibrium. For SLA enforcement and the trust model, we propose a permissioned blockchain architecture that uses a Support Vector Machine (SVM) for real-time autonomous and distributed monitoring of the UAV service. In particular, smart contract features of the blockchain are invoked to enforce the SLA terms of payment and penalty and to quantify the UAV service reputation. Simulation results confirm the accuracy of the theoretical analysis and the efficacy of the proposed model.
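
    To make the auction step concrete, the Python sketch below shows one plausible reputation-weighted, second-price (Vickrey-style) selection rule. The scoring rule, payment rule, operator names, and numbers are assumptions for illustration, since the abstract does not specify the paper's exact mechanism.

        # Illustrative reputation-weighted second-price selection of a UAV
        # operator. Names and numbers are hypothetical.

        def select_uav(bids):
            """bids: list of (operator, price, reputation in (0, 1])."""
            # Lower effective score (price discounted by reputation) wins.
            scored = sorted(bids, key=lambda b: b[1] / b[2])
            winner, win_price, win_rep = scored[0]
            # Vickrey-style payment: the runner-up's score converted back through
            # the winner's reputation, which keeps truthful bidding a dominant
            # strategy by the standard second-price argument.
            runner_up_score = scored[1][1] / scored[1][2]
            payment = runner_up_score * win_rep
            return winner, round(payment, 2)

        bids = [("opA", 100, 0.9), ("opB", 95, 0.7), ("opC", 120, 0.95)]
        print(select_uav(bids))  # ('opA', ...)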

    Data Mining of the Thermal Performance of Cool-Pipes in Massive Concrete via In Situ Monitoring

    Embedded cool-pipes are very important for massive concrete because their cooling effect can effectively prevent thermal cracks. In this study, a data mining approach to analyzing the thermal performance of cool-pipes via in situ monitoring is proposed. A detailed monitoring program applied in a high arch dam project provides a large, high-quality data source. The factors and relations governing the thermal performance of cool-pipes are derived from a theoretical thermal model. Support vector machine (SVM) technology is applied to mine the data, and the thermal performances of iron pipes and high-density polyethylene (HDPE) pipes are compared. The data mining results show that the iron pipe has better heat removal performance when the flow rate is lower than 50 L/min, and reveal that a turning flow rate of 80 L/min exists for the iron pipe. The prediction and classification results obtained from the data mining model agree well with the monitored data, which demonstrates the validity of the approach.
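
    A minimal sketch of the kind of SVM classification described above is given below in Python, assuming scikit-learn is available. The features (flow rate and pipe type) and the tiny synthetic data set are invented for illustration, whereas the study mines real in situ monitoring data from the dam project.

        # Minimal sketch of an SVM fit on monitored cool-pipe features,
        # using scikit-learn. The data set is synthetic and illustrative.
        import numpy as np
        from sklearn.svm import SVC

        # Features: [flow rate (L/min), pipe type (0 = HDPE, 1 = iron)]
        X = np.array([[30, 1], [45, 1], [60, 1], [90, 1],
                      [30, 0], [45, 0], [60, 0], [90, 0]])
        # Label: 1 if heat removal met the target in the monitored interval.
        y = np.array([1, 1, 1, 0, 0, 0, 1, 1])

        model = SVC(kernel="rbf", C=10.0).fit(X, y)
        print(model.predict([[40, 1], [40, 0]]))  # predicted class per pipe type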

    A Novel Methodology for Error-Resilient Circuits in Near-Threshold Computing

    Department of Electrical Engineering

    The main goal of VLSI system design is high performance with low energy consumption. To realize human-centric technologies such as the Internet of Things (IoT) and wearable devices, efficient power management techniques are required. Near-threshold computing (NTC) is one of the best-known techniques proposed to trade off energy consumption against performance; with this technique, the operating point is chosen to give the lowest energy at the highest achievable performance. However, NTC suffers significant performance degradation and is prone to timing errors. The conventional goal of integrated circuit (IC) design is to make the circuit operate correctly even under worst-case conditions, but guaranteeing this incurs considerable area and power overheads. As an alternative, the better-than-worst-case (BTWC) design paradigm has been proposed. One of its main ingredients is error-resilient circuits, which detect and correct timing errors at the cost of area and power overheads. In this thesis, we propose design methodologies that provide an optimal implementation of error-resilient circuits. Slack-based and sensitivity-based methodologies and a modified Quine-McCluskey (Q-M) algorithm are exploited to obtain the minimum set of error-resilient circuits without any loss of detection ability. With the sensitivity-based methodology, benchmark results show that the optimal designs reduce monitoring area by up to 46% without compromising the error detection ability of the initial error-resilient design. With the Quine-McCluskey (Q-M) algorithm, benchmark results show that the optimal design reduces by up to 72% the number of flip-flops that need to be changed to error-resilient circuits, without compromising error detection ability. In addition, further power and area reduction is possible when a reasonable underestimation of error detection ability is accepted. Monte Carlo analysis validates that the proposed method is tolerant to process variation.
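
    Selecting a minimum set of flip-flops to harden can be viewed as a covering problem. The Python sketch below shows a greedy set-cover version of that reduction purely for flavor; the thesis formulates the problem with slack/sensitivity analysis and a modified Quine-McCluskey algorithm, not this greedy heuristic, and the path and flip-flop names are hypothetical.

        # Greedy set-cover sketch of picking a minimal set of flip-flops to harden
        # so that every critical path is still observed. Names are hypothetical.

        def select_resilient_ffs(paths_by_ff):
            """paths_by_ff: {flip-flop: set of critical paths it can observe}."""
            uncovered = set().union(*paths_by_ff.values())
            chosen = []
            while uncovered:
                # Pick the flip-flop that observes the most still-uncovered paths.
                ff = max(paths_by_ff, key=lambda f: len(paths_by_ff[f] & uncovered))
                chosen.append(ff)
                uncovered -= paths_by_ff[ff]
            return chosen

        paths_by_ff = {
            "ff1": {"p1", "p2"},
            "ff2": {"p2", "p3", "p4"},
            "ff3": {"p4"},
        }
        print(select_resilient_ffs(paths_by_ff))  # ['ff2', 'ff1']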

    Cross-Layer Optimization for Power-Efficient and Robust Digital Circuits and Systems

    With the increasing demand for digital services, performance and power efficiency become vital requirements for digital circuits and systems. However, the enabling CMOS technology scaling has been facing significant challenges from device uncertainties such as process, voltage, and temperature variations. To ensure system reliability, worst-case corner assumptions are usually made at each design level. The over-pessimistic worst-case margin, however, leads to unnecessary power waste and performance loss as high as 2.2x, and since optimizations are traditionally confined to each specific level, those safety margins can hardly be properly exploited. To tackle this challenge, this Ph.D. thesis advocates cross-layer optimization for digital signal processing circuits and systems, to achieve a global balance between power consumption and output quality. To conclude, the traditional over-pessimistic worst-case approach leads to huge power waste. In contrast, the adaptive voltage scaling approach saves power (25% for the CORDIC application) by providing a just-needed supply voltage, and the power saving is maximized (46% for CORDIC) when a more aggressive voltage over-scaling scheme is applied. The sparse circuit errors produced by aggressive voltage over-scaling are mitigated by higher-level error-resilient designs. For functions such as FFT and CORDIC, smart error mitigation schemes are proposed to enhance reliability against soft errors and timing errors, respectively. Applications such as massive MIMO systems are robust against lower-level errors thanks to their intrinsically redundant antennas; this property makes it possible to adopt digital hardware that trades quality for power savings.

    Comment: 190 pages
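
    As a back-of-the-envelope illustration of why supply-voltage scaling yields savings of the order quoted above, the Python sketch below evaluates the standard dynamic-power relation P ∝ C·V²·f at a few voltage points chosen to reproduce the quoted percentages at constant frequency. The voltage values are illustrative assumptions, not figures from the thesis, and holding frequency constant is a simplification.

        # Back-of-the-envelope dynamic power vs. supply voltage, P_dyn ∝ C * V^2 * f.
        # Voltage points are illustrative; the 25%/46% figures quoted above come
        # from the thesis's CORDIC case study.

        def dynamic_power(v, f, c=1.0):
            return c * v**2 * f

        worst_case = dynamic_power(1.0, 1.0)     # pessimistic margin voltage
        adaptive = dynamic_power(0.866, 1.0)     # just-needed supply voltage
        overscaled = dynamic_power(0.735, 1.0)   # aggressive over-scaling

        print(f"adaptive scaling saves {1 - adaptive / worst_case:.0%}")   # 25%
        print(f"over-scaling saves     {1 - overscaled / worst_case:.0%}") # 46%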