
    Study and development of innovative strategies for energy-efficient cross-layer design of digital VLSI systems based on Approximate Computing

    The increasing demand for high performance and energy efficiency in modern digital systems has driven research into new design approaches able to go beyond the established energy-performance tradeoff. In the scientific literature, the Approximate Computing paradigm has been particularly prolific. Many applications in the domains of signal processing, multimedia, computer vision, and machine learning are known to be particularly resilient to errors occurring in their input data and during computation, producing outputs that, although degraded, are still largely acceptable in terms of quality. The Approximate Computing design paradigm leverages the characteristics of this group of applications to develop circuits, architectures, and algorithms that, by relaxing design constraints, perform their computations in an approximate or inexact manner, reducing energy consumption. This PhD research aims to explore the design of hardware/software architectures based on Approximate Computing techniques, filling the gap in the literature regarding effective applicability and deriving a systematic methodology to characterize its benefits and tradeoffs.
The main contributions of this work are:
- the introduction of approximate memory management inside the Linux OS, allowing dynamic allocation and de-allocation of approximate memory at user level, just as for normal exact memory;
- the development of an emulation environment for platforms with approximate memory units, where faults are injected during simulation according to models that reproduce the effects of circuit- and architecture-level approximate-memory techniques on memory cells;
- the implementation and analysis of the impact of approximate memory hardware on real applications: the H.264 video encoder, internally modified to allocate selected data buffers in approximate memory, and signal-processing applications (a digital filter) using approximate memory for input/output buffers and tap registers;
- the development of a fully reconfigurable, combinational floating-point unit that can work with reduced-precision formats.
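As a toy illustration of the emulation approach in the second contribution, the effect of holding a buffer in approximate memory can be mimicked by probabilistic bit flips. This sketch is ours, not code from the thesis; `inject_faults` and its single bit-error-rate parameter are simplifying assumptions:

```python
import random

def inject_faults(buf: bytearray, bit_error_rate: float, seed=None) -> int:
    """Flip each bit of `buf` independently with probability
    `bit_error_rate`, emulating data held in an approximate memory
    unit. Returns the number of injected bit flips."""
    rng = random.Random(seed)
    flips = 0
    for i in range(len(buf)):
        for b in range(8):
            if rng.random() < bit_error_rate:
                buf[i] ^= 1 << b  # flip bit b of byte i
                flips += 1
    return flips

# A pixel-like data buffer "stored" at a 0.1% bit error rate.
frame = bytearray(range(256))
flipped = inject_faults(frame, bit_error_rate=1e-3, seed=42)
```

An application-level study would then measure output quality (e.g. encoder PSNR) as a function of the injected error rate.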

    GLM permutation - nonparametric inference for arbitrary general linear models

    Introduction: Permutation methods are finding growing use in neuroimaging data analyses (e.g. randomise in FSL, SnPM in SPM, XBAMM/BAMM/CAMBA, etc.). These methods provide exact control of false positives, make only weak assumptions, and allow nonstandard types of statistics (e.g. the smoothed variance t-test). With fast and inexpensive computing, there would seem few reasons not to use nonparametric methods. A significant limitation of these methods, however, is the lack of flexibility with respect to the experimental design and nuisance variables. Each specific design dictates the type of exchangeability of null data, and hence how to permute. Nuisance effects (e.g. age) render data non-exchangeable even when the effect of interest is null. Hence, even something as simple as ANCOVA has no exact permutation test. Recently there has been an active literature on approximate, but accurate, permutation tests for 2-variable regression with one effect of interest and one nuisance variable (see the review by Anderson & Robinson [1]). Here we extend and evaluate these methods for use with an arbitrary General Linear Model (GLM).
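One widely used member of the family of approximate permutation schemes reviewed by Anderson & Robinson is the Freedman-Lane procedure, which permutes the residuals of the nuisance-only fit. A minimal sketch for a single regressor of interest (ours, not the authors' implementation; all names are assumptions):

```python
import numpy as np

def freedman_lane_pvalue(y, X, Z, n_perm=999, rng=None):
    """Approximate permutation p-value for the effect of regressor X on y
    in the presence of nuisance covariates Z (Freedman-Lane scheme)."""
    rng = np.random.default_rng(rng)

    def tstat(y, X, Z):
        D = np.column_stack([X, Z])                 # full design matrix
        beta, *_ = np.linalg.lstsq(D, y, rcond=None)
        resid = y - D @ beta
        sigma2 = resid @ resid / (len(y) - D.shape[1])
        cov = sigma2 * np.linalg.inv(D.T @ D)
        return beta[0] / np.sqrt(cov[0, 0])         # t for the X column

    # Fit the reduced (nuisance-only) model; keep fit and residuals.
    gamma, *_ = np.linalg.lstsq(Z, y, rcond=None)
    fitted, resid = Z @ gamma, y - Z @ gamma
    t_obs = tstat(y, X, Z)
    hits = 0
    for _ in range(n_perm):
        y_star = fitted + rng.permutation(resid)    # permute residuals only
        if abs(tstat(y_star, X, Z)) >= abs(t_obs):
            hits += 1
    return (hits + 1) / (n_perm + 1)
```

Because only the nuisance-model residuals are permuted, the nuisance structure (e.g. age) is preserved under the null, which is exactly what makes the test approximate rather than exact.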

    MDP-Based Scheduling Design for Mobile-Edge Computing Systems with Random User Arrival

    In this paper, we investigate the scheduling design of a mobile-edge computing (MEC) system, where the random arrival of mobile devices with computation tasks in both spatial and temporal domains is considered. The binary computation offloading model is adopted. Every task is indivisible and can be computed at either the mobile device or the MEC server. We formulate the optimization of task offloading decision, uplink transmission device selection, and power allocation in all the frames as an infinite-horizon Markov decision process (MDP). Due to the uncertainty in device number and location, conventional approximate MDP approaches to addressing the curse of dimensionality cannot be applied. A novel low-complexity sub-optimal solution framework is then proposed. We first introduce a baseline scheduling policy, whose value function can be derived analytically. Then, one-step policy iteration is adopted to obtain a sub-optimal scheduling policy whose performance can be bounded analytically. Simulation results show that the gain of the sub-optimal policy over various benchmarks is significant. Comment: 6 pages, 3 figures; accepted by Globecom 2019; title changed to better describe the work, introduction condensed, typos corrected.
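The one-step policy iteration at the core of the proposed framework can be sketched in the tabular finite-state case: given the value function of the baseline policy, the improved policy is greedy with respect to a one-step lookahead. This toy sketch is ours and ignores the paper's random-arrival state space:

```python
import numpy as np

def one_step_policy_iteration(P, R, V_base, gamma=0.9):
    """Given transition matrices P[a][s, s'], rewards R[a][s], and the
    value function V_base of a baseline policy, return the greedy
    (one-step improved) policy and its Q-values."""
    Q = np.stack([R[a] + gamma * P[a] @ V_base for a in range(len(P))])
    return Q.argmax(axis=0), Q

# Tiny 2-state MDP: action 0 stays put, action 1 swaps states.
P = [np.eye(2), np.array([[0.0, 1.0], [1.0, 0.0]])]
R = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
V_base = np.zeros(2)  # value function of a trivial baseline policy
policy, Q = one_step_policy_iteration(P, R, V_base)
```

Standard policy-iteration theory guarantees the greedy policy is at least as good as the baseline, which is what lets the paper bound the sub-optimal policy's performance analytically.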

    Energy-Efficient Approximate Least Squares Accelerator: A Case Study of Radio Astronomy Calibration Processing

    Approximate computing allows the introduction of inaccuracy into a computation in exchange for cost savings such as energy consumption, chip area, and latency. Targeting energy efficiency, approximate designs for multipliers, adders, and multiply-accumulate (MAC) units have been extensively investigated over the past decade. However, accelerator designs for relatively larger architectures have received less attention so far. The Least Squares (LS) algorithm is widely used in digital signal processing applications, e.g., image reconstruction. This work proposes a novel LS accelerator design based on a heterogeneous architecture, where the heterogeneity is introduced using accurate and approximate processing cores. We consider a case study of radio astronomy calibration processing that employs a complex-input iterative LS algorithm. Our proposed methodology exploits the intrinsic error resilience of this algorithm: initial iterations are processed on approximate modules, while later ones run on accurate modules. Our energy-quality experiments show up to 24% energy savings compared to an accurate (optimized) counterpart for biased designs, and up to 29% energy savings when unbiasing is introduced. The proposed LS accelerator design does not increase the number of iterations and provides sufficient precision to converge to an acceptable solution.
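The iterate-approximately-then-refine idea can be emulated in software with mixed floating-point precision: early gradient-descent iterations of the LS solve run in float16 (standing in for the approximate cores) and later ones in float64. This is our illustrative analogue, not the proposed hardware design:

```python
import numpy as np

def mixed_precision_lsqr(A, b, n_approx=20, n_exact=200, lr=None):
    """Gradient-descent solve of min ||Ax - b||^2 where the first
    n_approx iterations run in float16 (emulating approximate cores)
    and the remaining n_exact iterations in float64 (exact cores)."""
    if lr is None:
        lr = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step size
    x = np.zeros(A.shape[1], dtype=np.float16)
    A16, b16 = A.astype(np.float16), b.astype(np.float16)
    for _ in range(n_approx):                  # cheap, inexact phase
        g = A16.T @ (A16 @ x - b16)
        x = (x - np.float16(lr) * g).astype(np.float16)
    x = x.astype(np.float64)
    for _ in range(n_exact):                   # accurate refinement phase
        g = A.T @ (A @ x - b)
        x = x - lr * g
    return x
```

The early low-precision iterations only need to land in the basin of the solution; the accurate phase then converges to full precision, mirroring the accelerator's claim that approximation does not increase the iteration count needed for an acceptable solution.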

    Eigenvalues and eigenfunctions of the Laplacian via inverse iteration with shift

    In this paper we present an iterative method, inspired by the inverse iteration with shift technique of finite linear algebra, designed to find the eigenvalues and eigenfunctions of the Laplacian with homogeneous Dirichlet boundary condition for arbitrary bounded domains Ω ⊂ R^N. This method, which has a direct functional-analysis approach, does not approximate the eigenvalues of the Laplacian as those of a finite linear operator. It is based on uniform convergence away from nodal surfaces and can produce a simple and fast algorithm for computing the eigenvalues with minimal computational requirements, instead of using the ubiquitous Rayleigh quotient of finite linear algebra. Also, an alternative expression for the Rayleigh quotient in the associated infinite-dimensional Sobolev space which avoids the integration of gradients is introduced and shown to be more efficient. The method can also be used to produce the spectral decomposition of any given function u ∈ L²(Ω). Comment: In this version the numerical tests in Section 6 were considerably improved and Section 5, entitled "Normalization at each step", was introduced. Moreover, minor adjustments were made in Section 1 (Introduction) and Section 7 (Final Comments). Breno Loureiro Giacchini was added as coauthor.
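For reference, the finite-dimensional inverse iteration with shift that inspires the method works as follows: repeatedly solve (A - σI)v_new = v and normalize, converging to the eigenpair whose eigenvalue lies closest to the shift σ. A sketch (ours) using the 1-D discrete Dirichlet Laplacian as the test matrix:

```python
import numpy as np

def inverse_iteration(A, shift, n_iter=50):
    """Inverse iteration with shift: converges to the eigenpair of A
    whose eigenvalue is closest to `shift` (shift must not itself be
    an eigenvalue, or A - shift*I is singular)."""
    n = A.shape[0]
    v = np.ones(n) / np.sqrt(n)
    M = A - shift * np.eye(n)
    for _ in range(n_iter):
        v = np.linalg.solve(M, v)
        v /= np.linalg.norm(v)      # normalization at each step
    lam = v @ A @ v                 # Rayleigh quotient of the unit vector
    return lam, v

# 1-D discrete Dirichlet Laplacian: tridiag(-1, 2, -1), n = 10.
n = 10
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
lam, v = inverse_iteration(A, shift=0.05)
```

The discrete eigenvalues are known in closed form, 2 - 2cos(kπ/(n+1)), which makes this a convenient sanity check; the paper's point is precisely that its method avoids such finite discretizations of the operator.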

    Computing better approximate pure Nash equilibria in cut games via semidefinite programming

    Cut games are among the most fundamental strategic games in algorithmic game theory. It is well-known that computing an exact pure Nash equilibrium in these games is PLS-hard, so research has focused on computing approximate equilibria. We present a polynomial-time algorithm that computes 2.7371-approximate pure Nash equilibria in cut games. This is the first improvement over the previously best-known bound of 3, due to the work of Bhalgat, Chakraborty, and Khanna from EC 2010. Our algorithm is based on a general recipe proposed by Caragiannis, Fanelli, Gravin, and Skopalik from FOCS 2011 and applied to several potential games since then. The first novelty of our work is the introduction of a phase that can identify subsets of players who can simultaneously improve their utilities considerably. This is done via semidefinite programming and randomized rounding. In particular, a negative objective value of the semidefinite program guarantees that no such considerable improvement is possible for a given set of players. Otherwise, randomized rounding of the SDP solution is used to identify a set of players who can simultaneously improve their strategies considerably, which allows the algorithm to make progress. The way rounding is performed is another important novelty of our work. Here, we exploit an idea that dates back to a paper by Feige and Goemans from 1995, but we take it to an extreme that has not been analyzed before.
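For context, the cut game itself and the notion of an α-approximate equilibrium can be sketched with plain best-response dynamics (the paper's SDP-plus-rounding phase is far more involved; this toy sketch and its names are ours): each vertex-player picks a side of the cut and earns the weight of its incident edges that cross it, and at an α-approximate equilibrium no player can multiply its payoff by more than a factor α by switching sides.

```python
import numpy as np

def best_response_dynamics(W, alpha=1.0, max_rounds=10_000):
    """Best-response dynamics in a cut game on a weighted graph W.
    A player switches sides only if that multiplies its payoff by more
    than alpha, so fixed points are alpha-approximate pure Nash
    equilibria (alpha = 1 gives exact equilibria, i.e. local max-cuts)."""
    n = W.shape[0]
    s = np.zeros(n, dtype=int)          # side (0 or 1) of each player
    for _ in range(max_rounds):
        moved = False
        for i in range(n):
            cut = sum(W[i, j] for j in range(n) if s[j] != s[i])
            stay = sum(W[i, j] for j in range(n) if s[j] == s[i])
            if stay > alpha * cut:      # switching improves payoff enough
                s[i] ^= 1
                moved = True
        if not moved:
            break
    return s
```

Since the total cut weight is an exact potential for this game, the dynamics always terminate, but may take exponentially many steps; the paper's contribution is a polynomial-time algorithm with a provable approximation factor.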