
    Managing performance vs. accuracy trade-offs with loop perforation

    Many modern computations (such as video and audio encoders, Monte Carlo simulations, and machine learning algorithms) are designed to trade off accuracy in return for increased performance. To date, such computations typically use ad hoc, domain-specific techniques developed specifically for the computation at hand. Loop perforation provides a general technique to trade accuracy for performance by transforming loops to execute a subset of their iterations. A criticality testing phase filters out critical loops (whose perforation produces unacceptable behavior) to identify tunable loops (whose perforation produces more efficient and still acceptably accurate computations). A perforation space exploration algorithm perforates combinations of tunable loops to find Pareto-optimal perforation policies. Our results indicate that, for a range of applications, this approach typically delivers performance increases of over a factor of two (and up to a factor of seven) while changing the result that the application produces by less than 10%.
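
    The core transformation is straightforward to illustrate. The sketch below is a minimal, hypothetical example (not the authors' implementation): a Monte Carlo kernel whose loop is perforated by taking a larger stride, followed by a naive exploration loop that measures how much each perforation rate shifts the result relative to the unperforated baseline.

```python
# Minimal loop-perforation sketch: a hypothetical Monte Carlo kernel whose
# loop tolerates skipped iterations. Not taken from the paper's tool.
import random

def estimate_pi(samples, perforation_rate=0):
    """Monte Carlo estimate of pi; perforation_rate skips iterations
    (0 = exact loop, 3 = execute every 4th iteration)."""
    stride = perforation_rate + 1          # perforated loops take a larger stride
    hits = 0
    executed = 0
    for _ in range(0, samples, stride):    # execute only a subset of iterations
        x, y = random.random(), random.random()
        hits += (x * x + y * y) <= 1.0
        executed += 1
    return 4.0 * hits / executed

# Naive perforation-space exploration: compare perforated variants against
# the unperforated baseline and note which stay within a 10% error bound.
baseline = estimate_pi(1_000_000)
for rate in (1, 3, 7):
    approx = estimate_pi(1_000_000, perforation_rate=rate)
    error = abs(approx - baseline) / abs(baseline)
    print(f"rate={rate}: result={approx:.5f}, relative error={error:.2%}")
```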

    Characterizing Accuracy Trade-offs of EEG Applications on Embedded HMPs

    Electroencephalography (EEG) recordings are analyzed using battery-powered wearable devices to monitor brain activities and neurological disorders. These applications require long and continuous processing to generate feasible results. However, wearable devices are constrained by limited energy and computation resources, owing to the small form factors required for practical use. Embedded heterogeneous multi-core platforms (HMPs) can provide better performance within limited energy budgets for EEG applications. The error resilience of the EEG application pipeline can be exploited further to maximize the performance and energy gains on HMPs. However, disciplined tuning of approximation on embedded HMPs requires a thorough exploration of the accuracy-performance-power trade-off space. In this work, we characterize the error resilience of three EEG applications, including Epileptic Seizure Detection, Sleep Stage Classification, and Stress Detection, on a real-world embedded HMP test-bed, the Odroid XU3 platform. We present a combinatorial evaluation of power-performance-accuracy trade-offs of EEG applications at different approximation, power, and performance levels to provide insights into the disciplined tuning of approximation in EEG applications on embedded platforms. Comment: 7 pages, 10 figures.
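
    The combinatorial evaluation the abstract describes can be pictured as a sweep over every (approximation level, core type, frequency) configuration, recording accuracy, runtime, and energy for each. The sketch below is a hypothetical illustration of that sweep; run_eeg_pipeline and measure_energy are placeholder stand-ins for the real application pipeline and the Odroid XU3 power sensors, not code from the paper.

```python
# Hypothetical combinatorial trade-off sweep over approximation, core type,
# and frequency; the two estimator functions are dummy stand-in models.
from itertools import product

approximation_levels = [0, 1, 2, 3]      # 0 = exact pipeline
core_types = ["LITTLE", "big"]           # big.LITTLE clusters of the Odroid XU3
frequencies_mhz = [800, 1400, 2000]

def run_eeg_pipeline(level, core, freq):
    """Stand-in model: more approximation trades accuracy for runtime."""
    accuracy = 1.0 - 0.03 * level
    runtime_s = (2000.0 / freq) * (1.0 if core == "big" else 2.0) / (1 + level)
    return accuracy, runtime_s

def measure_energy(core, freq, runtime_s):
    """Stand-in for on-board power sensing: power model * runtime."""
    power_w = (2.5 if core == "big" else 0.6) * (freq / 2000)
    return power_w * runtime_s

results = []
for level, core, freq in product(approximation_levels, core_types, frequencies_mhz):
    accuracy, runtime = run_eeg_pipeline(level, core, freq)
    energy = measure_energy(core, freq, runtime)
    results.append({"level": level, "core": core, "freq": freq,
                    "accuracy": accuracy, "runtime": runtime, "energy": energy})

# Keep only configurations above a minimum acceptable accuracy, then inspect
# the remaining power-performance trade-offs.
acceptable = [r for r in results if r["accuracy"] >= 0.95]
print(f"{len(acceptable)} of {len(results)} configurations meet the accuracy bound")
```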

    Trading-off accuracy vs energy in multicore processors via evolutionary algorithms combining loop perforation and static analysis-based scheduling

    This work addresses the problem of energy-efficient scheduling and allocation of tasks in multicore environments, where the tasks can tolerate a certain loss in accuracy of either final or intermediate results while still providing proper functionality. Loss in accuracy is usually obtained with techniques that decrease computational load, which can result in significant energy savings. To this end, we use the loop perforation technique, which transforms loops to execute a subset of their iterations, and integrate it into our existing optimisation tool for energy-efficient scheduling in multicore environments, based on evolutionary algorithms and static analysis for estimating the energy consumption of different schedules. The approach is designed for multicore XMOS chips, but it can be adapted to any multicore environment with slight changes. The experiments conducted on a case study in different scenarios show that our new scheduler enhanced with loop perforation improves the previous one, achieving significant energy savings (31% on average) for acceptable levels of accuracy loss.
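
    A minimal sketch of how such an evolutionary scheduler could be structured is shown below: each chromosome encodes, per task, a core assignment and a perforation rate, and the fitness function minimises an estimated energy cost subject to an accuracy-loss budget. The energy and accuracy estimators here are simple placeholders, not the paper's static analysis, and all names are assumptions.

```python
# Hypothetical chromosome + fitness sketch for an evolutionary scheduler that
# combines task-to-core mapping with loop perforation. Estimators are dummies.
import random

NUM_TASKS = 6
NUM_CORES = 4                      # e.g. cores of a multicore XMOS chip
PERFORATION_RATES = [0, 1, 3]      # 0 = no perforation
MAX_ACCURACY_LOSS = 0.10           # acceptable accuracy-loss budget

def random_individual():
    """Chromosome: one (core, perforation rate) pair per task."""
    return [(random.randrange(NUM_CORES), random.choice(PERFORATION_RATES))
            for _ in range(NUM_TASKS)]

def estimate_energy(individual):
    """Stand-in for the static-analysis energy model."""
    return sum(1.0 / (1 + rate) + 0.1 * core for core, rate in individual)

def estimate_accuracy_loss(individual):
    """Stand-in accuracy model: more perforation, more loss."""
    return sum(0.02 * rate for _, rate in individual) / NUM_TASKS

def fitness(individual):
    """Minimise energy; reject schedules that exceed the accuracy budget."""
    if estimate_accuracy_loss(individual) > MAX_ACCURACY_LOSS:
        return float("inf")
    return estimate_energy(individual)

# Toy mutation-only evolutionary loop: mutate the best schedule and keep the
# best 20 individuals each generation.
population = [random_individual() for _ in range(20)]
for _ in range(50):
    parent = min(population, key=fitness)
    child = list(parent)
    i = random.randrange(NUM_TASKS)
    child[i] = (random.randrange(NUM_CORES), random.choice(PERFORATION_RATES))
    population = sorted(population + [child], key=fitness)[:20]
print("best energy estimate:", fitness(min(population, key=fitness)))
```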

    DT-MSOF Strategy and its Application to Reduce the Number of Operations in AHP

    A computing strategy called Double Track-Most Significant Operation First (DT-MSOF) is proposed. The goal of this strategy is to reduce computation time by reducing the number of operations that need to be executed, while maintaining a correct final result. Execution proceeds over a sequence of computing operations that have previously been sorted by significance, and computation runs only until the result meets the needs of the user. In this study, the DT-MSOF strategy was used to modify the Analytic Hierarchy Process (AHP) algorithm into MD-AHP in order to reduce the number of operations that need to be performed. Conventional AHP uses a run-to-completion approach, in which decisions can only be obtained after all of the operations have been completed. In contrast, the calculations in MD-AHP are carried out iteratively only until the conditions are reached under which a decision can be made. The simulation results show that MD-AHP can reduce the number of operations needed to obtain the same results (decisions) as conventional AHP. It was also found that the more uneven the distribution of priority values, the greater the reduction in the number of operations.
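
    The early-termination idea behind most-significant-operation-first can be illustrated with a small sketch, shown below. It compares two alternatives by processing per-criterion contributions in decreasing order of priority weight and stops as soon as the leader can no longer be overtaken by the remaining, less significant terms. This is a hypothetical illustration of the principle, not the MD-AHP algorithm itself.

```python
# Hypothetical most-significant-operation-first sketch with early termination.
def decide(weights, scores_a, scores_b):
    """weights: criterion priorities; scores_*: per-criterion scores in [0, 1]."""
    order = sorted(range(len(weights)), key=lambda i: weights[i], reverse=True)
    total_a = total_b = 0.0
    remaining = sum(weights)           # upper bound on the still-undecided weight
    operations = 0
    for i in order:                    # most significant criterion first
        total_a += weights[i] * scores_a[i]
        total_b += weights[i] * scores_b[i]
        remaining -= weights[i]
        operations += 1
        # Stop once the gap exceeds the largest swing the remaining terms allow.
        if abs(total_a - total_b) > remaining:
            break
    winner = "A" if total_a > total_b else "B"
    return winner, operations

weights  = [0.50, 0.25, 0.15, 0.07, 0.03]   # uneven priorities terminate early
scores_a = [0.9, 0.4, 0.5, 0.2, 0.8]
scores_b = [0.2, 0.6, 0.7, 0.9, 0.1]
print(decide(weights, scores_a, scores_b))   # decision reached after 2 operations
```

    As in the abstract, the more uneven the priority weights, the sooner the stopping condition triggers and the fewer operations are executed.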