
    Limits on Fundamental Limits to Computation

    An indispensable part of our lives, computing has also become essential to industries and governments. Steady improvements in computer hardware have been supported by periodic doubling of transistor densities in integrated circuits over the last fifty years. Such Moore scaling now requires increasingly heroic efforts, stimulating research in alternative hardware and stirring controversy. To help evaluate emerging technologies and enrich our understanding of integrated-circuit scaling, we review fundamental limits to computation: in manufacturing, energy, physical space, design and verification effort, and algorithms. To outline what is achievable in principle and in practice, we recall how some limits were circumvented and compare loose and tight limits. We also point out that engineering difficulties encountered by emerging technologies may indicate yet-unknown limits.
    Comment: 15 pages, 4 figures, 1 table
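    For a rough feel of one of the energy limits such a review covers, below is a minimal back-of-the-envelope sketch of Landauer's bound (a standard textbook limit; the temperature and throughput numbers are illustrative assumptions, not figures taken from the paper):

```python
import math

# Landauer's principle: erasing one bit dissipates at least k_B * T * ln(2) of energy.
# This is a standard thermodynamic bound, shown only to illustrate the kind of
# fundamental energy limit surveyed; the numbers are illustrative, not from the paper.
K_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # assumed room temperature, K

landauer_joules = K_B * T * math.log(2)
print(f"Landauer limit at {T:.0f} K: {landauer_joules:.3e} J per bit erased")

# For scale: a hypothetical device erasing 1e18 bits per second at this bound
# would dissipate only a few milliwatts.
print(f"Power for 1e18 bit erasures/s: {landauer_joules * 1e18 * 1e3:.2f} mW")
```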

    Privacy Leakages in Approximate Adders

    Approximate computing has recently emerged as a promising method to meet the low power requirements of digital designs. The erroneous outputs produced in approximate computing can be partially a function of each chip's process variation. We show that, in such schemes, the erroneous outputs produced on each chip instance can reveal the identity of the chip that performed the computation, possibly jeopardizing user privacy. In this work, we perform simulation experiments on 32-bit Ripple Carry Adders, Carry Lookahead Adders, and Han-Carlson Adders running at over-scaled operating points. Our results show that identification is possible; we contrast the identifiability of each type of adder, and we quantify how the success of identification varies with the extent of over-scaling and noise. Our results are the first to show that approximate digital computations may compromise privacy. Designers of future approximate computing systems should be aware of the possible privacy leakages and decide whether mitigation is warranted in their application.
    Comment: 2017 IEEE International Symposium on Circuits and Systems (ISCAS)
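    To make the leakage mechanism concrete, here is a minimal toy simulation assuming a simplified ripple-carry model in which over-scaling drops the carry out of chip-specific "slow" bit positions; the 16-bit width, the variation model, and the L1-distance matching are illustrative assumptions, not the paper's methodology:

```python
import numpy as np

# Toy model (not the paper's setup): each simulated "chip" has chip-specific slow bit
# positions; under over-scaling, a carry leaving a slow position is dropped, so the
# error pattern on a shared input set acts as a fingerprint of the chip.
rng = np.random.default_rng(0)
N_BITS, N_CHIPS, N_INPUTS = 16, 8, 200

# Hypothetical process-variation model: each bit position is "slow" with probability 0.15.
slow_bits = rng.random((N_CHIPS, N_BITS)) < 0.15

def approx_add(a, b, slow):
    """Ripple-carry addition where the carry out of any slow bit position is dropped."""
    carry, result = 0, 0
    for i in range(N_BITS):
        ai, bi = (a >> i) & 1, (b >> i) & 1
        s = ai ^ bi ^ carry
        carry = (ai & bi) | (carry & (ai ^ bi))
        if slow[i]:
            carry = 0  # over-scaled: carry does not arrive in time
        result |= s << i
    return result

inputs = rng.integers(0, 2**N_BITS, size=(N_INPUTS, 2))

def signature(chip):
    """Error vector (approximate minus exact modular sum) over the shared input set."""
    return np.array([approx_add(a, b, slow_bits[chip]) - ((a + b) % 2**N_BITS)
                     for a, b in inputs])

sigs = np.stack([signature(c) for c in range(N_CHIPS)])

# Identification: a noisy observation from an anonymous chip is matched to the
# enrolled signature closest in L1 distance.
anonymous = 3
noisy_obs = sigs[anonymous] + rng.integers(-2, 3, size=N_INPUTS)  # small measurement noise
guess = int(np.argmin(np.abs(sigs - noisy_obs).sum(axis=1)))
print("true chip:", anonymous, "identified as:", guess)
```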

    Measurement-Induced Phase Transitions in the Dynamics of Entanglement

    We define dynamical universality classes for many-body systems whose unitary evolution is punctuated by projective measurements. In cases where such measurements occur randomly at a finite rate p for each degree of freedom, we show that the system has two dynamical phases: `entangling' and `disentangling'. The former occurs for p smaller than a critical rate p_c, and is characterized by volume-law entanglement in the steady state and `ballistic' entanglement growth after a quench. By contrast, for p > p_c the system can sustain only area-law entanglement. At p = p_c the steady state is scale-invariant and, in 1+1D, the entanglement grows logarithmically after a quench. To obtain a simple heuristic picture for the entangling-disentangling transition, we first construct a toy model that describes the zeroth Rényi entropy in discrete time. We solve this model exactly by mapping it to an optimization problem in classical percolation. The generic entangling-disentangling transition can be diagnosed using the von Neumann entropy and higher Rényi entropies, and it shares many qualitative features with the toy problem. We study the generic transition numerically in quantum spin chains, and show that the phenomenology of the two phases is similar to that of the toy model, but with distinct `quantum' critical exponents, which we calculate numerically in 1+1D. We examine two different cases for the unitary dynamics: Floquet dynamics for a nonintegrable Ising model, and random circuit dynamics. We obtain compatible universal properties in each case, indicating that the entangling-disentangling phase transition is generic for projectively measured many-body systems. We discuss the significance of this transition for numerical calculations of quantum observables in many-body systems.
    Comment: 17+4 pages, 16 figures; updated discussion and results for mutual information; graphics error fixed
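    As background for the classical percolation problem the toy model maps onto, here is a minimal, generic sketch of 2D bond percolation (spanning probability versus bond probability, via union-find); the square-lattice geometry and spanning observable are standard textbook choices, not the paper's circuit geometry or min-cut quantity:

```python
import numpy as np

# Generic bond percolation on an L x L site grid: open each bond with probability p
# and test whether the top row is connected to the bottom row. The spanning fraction
# rises sharply near the known square-lattice bond threshold p_c = 1/2.
rng = np.random.default_rng(1)

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[rb] = ra

def spans(L, p):
    """True if open bonds connect the top row to the bottom row of the grid."""
    parent = list(range(L * L))
    idx = lambda r, c: r * L + c
    for r in range(L):
        for c in range(L):
            if r + 1 < L and rng.random() < p:   # vertical bond open
                union(parent, idx(r, c), idx(r + 1, c))
            if c + 1 < L and rng.random() < p:   # horizontal bond open
                union(parent, idx(r, c), idx(r, c + 1))
    top = {find(parent, idx(0, c)) for c in range(L)}
    return any(find(parent, idx(L - 1, c)) in top for c in range(L))

L, trials = 32, 200
for p in (0.3, 0.45, 0.5, 0.55, 0.7):
    frac = sum(spans(L, p) for _ in range(trials)) / trials
    print(f"p = {p:.2f}: spanning fraction ~ {frac:.2f}")
```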

    Asymptotic behavior of memristive circuits

    Interest in memristors has risen due to their possible application both as memory units and as computational devices in combination with CMOS. This is in part due to their nonlinear dynamics and their strong dependence on the circuit topology. We provide evidence that purely memristive circuits can also be employed for computational purposes. In the present paper we show that a polynomial Lyapunov function in the memory parameters exists for the case of DC-controlled memristors. Such a Lyapunov function can be asymptotically approximated with binary variables and mapped to quadratic combinatorial optimization problems. This also shows a direct parallel between memristive circuits and the Hopfield-Little model. In the case of Erdős-Rényi random circuits, we show numerically that the distribution of the matrix elements of the projectors can be roughly approximated by a Gaussian distribution, and that it scales with the inverse square root of the number of elements. This provides an approximate but direct connection with the physics of disordered systems and, in particular, of mean-field spin glasses. Using this, and the fact that the interaction is controlled by a projector operator on the loop space of the circuit, we estimate the number of stationary points of the approximate Lyapunov function and provide a scaling formula as an upper bound in terms of the circuit topology only.
    Comment: 20 pages, 8 figures; proofs corrected, figures changed; results substantially unchanged; to appear in Entropy
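    To illustrate the kind of quadratic binary optimization the Lyapunov function maps onto, here is a minimal Hopfield-style sketch; the random Gaussian couplings and greedy single-flip descent are generic stand-ins, not the projector-derived interaction or the circuit-specific energy landscape of the paper:

```python
import numpy as np

# Generic QUBO / Hopfield-style energy over binary variables: E(x) = -x^T J x - h^T x.
# The couplings here are random with ~1/sqrt(n) scale, as a stand-in for the
# projector-derived interaction matrix discussed in the abstract.
rng = np.random.default_rng(0)
n = 30
J = rng.normal(0, 1 / np.sqrt(n), size=(n, n))
J = (J + J.T) / 2            # symmetric couplings
np.fill_diagonal(J, 0.0)
h = rng.normal(0, 0.1, size=n)

def energy(x):
    """Quadratic pseudo-Lyapunov function over x in {0, 1}^n."""
    return -x @ J @ x - h @ x

def greedy_descent(x):
    """Asynchronous single-flip descent; accepts a flip only if it lowers the energy."""
    improved = True
    while improved:
        improved = False
        for i in rng.permutation(n):
            y = x.copy()
            y[i] ^= 1
            if energy(y) < energy(x):
                x, improved = y, True
    return x

x0 = rng.integers(0, 2, size=n)
x_min = greedy_descent(x0.copy())
print("initial energy:", round(energy(x0), 3), "-> local minimum:", round(energy(x_min), 3))
```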

    Detection of atrial fibrillation episodes in long-term heart rhythm signals using a support vector machine

    Atrial fibrillation (AF) is a serious heart arrhythmia leading to a significant increase in the risk of ischemic stroke. Clinically, an AF episode is recognized in an electrocardiogram. However, detection of asymptomatic AF, which requires long-term monitoring, is more efficient when based on the irregularity of beat-to-beat intervals estimated by heart rate (HR) features. Automated classification of heartbeats into AF and non-AF by means of the Lagrangian Support Vector Machine (LSVM) has been proposed. The classifier input vector consisted of sixteen features, including four coefficients very sensitive to beat-to-beat heart changes, taken from fetal heart rate analysis in perinatal medicine. The effectiveness of the proposed classifier has been verified on the MIT-BIH Atrial Fibrillation Database. Designing the LSVM classifier using a very large number of feature vectors requires extreme computational effort. Therefore, an original approach has been proposed to determine a training set of the smallest possible size that still guarantees high-quality AF detection. This makes it possible to obtain satisfactory results using only 1.39% of all heartbeats as the training data. A post-processing stage based on aggregation of classified heartbeats into AF episodes has been applied to provide more reliable information on patient risk. Results obtained during the testing phase showed a sensitivity of 98.94%, a positive predictive value of 98.39%, and a classification accuracy of 98.86%.
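    For a sense of the classification pipeline, here is a minimal sketch using synthetic RR-interval windows, three generic irregularity features, and an off-the-shelf linear SVM; these stand in for the paper's sixteen features and Lagrangian SVM, and the data are synthetic rather than from the MIT-BIH database:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

# Synthetic stand-in for HR-based AF detection: AF windows have more irregular
# beat-to-beat (RR) intervals; three common irregularity features feed a linear SVM.
rng = np.random.default_rng(0)

def rr_window(af, n=30):
    """Synthetic RR intervals in seconds; AF windows get larger beat-to-beat jitter."""
    base = rng.uniform(0.6, 1.0)
    jitter = 0.15 if af else 0.02
    return base + rng.normal(0, jitter, size=n)

def features(rr):
    diff = np.diff(rr)
    rmssd = np.sqrt(np.mean(diff ** 2))      # root mean square of successive differences
    pnn50 = np.mean(np.abs(diff) > 0.05)     # fraction of successive differences > 50 ms
    cv = np.std(rr) / np.mean(rr)            # coefficient of variation
    return [rmssd, pnn50, cv]

labels = rng.integers(0, 2, size=2000)
X = np.array([features(rr_window(af)) for af in labels])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.5, random_state=0)

clf = LinearSVC(C=1.0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
tp = np.sum((pred == 1) & (y_te == 1))
fp = np.sum((pred == 1) & (y_te == 0))
fn = np.sum((pred == 0) & (y_te == 1))
print("sensitivity:", tp / (tp + fn), "PPV:", tp / (tp + fp), "accuracy:", np.mean(pred == y_te))
```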