
    Constraining Attacker Capabilities Through Actuator Saturation

    For LTI control systems, we provide mathematical tools - in terms of Linear Matrix Inequalities - for computing outer ellipsoidal bounds on the reachable sets that attacks can induce in the system when they are subject to the physical limits of the actuators. Next, for a given set of dangerous states (states that, if reached, compromise the integrity or safe operation of the system), we provide tools for designing new artificial limits on the actuators (smaller than their physical bounds) such that the new ellipsoidal bounds (and thus the new reachable sets) are as large as possible in terms of volume while guaranteeing that the dangerous states are not reachable. This ensures that the new bounds cut as little as possible from the original reachable set, minimizing the loss of system performance. Computer simulations using a platoon of vehicles are presented to illustrate the performance of our tools.
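
    As a concrete illustration of this kind of analysis, the sketch below computes an outer ellipsoidal bound on the reachable set of a small continuous-time LTI system driven by a peak-bounded input, using the classic S-procedure LMI solved with CVXPY. The example plant, the multiplier grid, and the unit-peak-input formulation are illustrative assumptions, not necessarily the paper's exact setup.

        # Sketch: outer ellipsoidal bound {x : x'Px <= 1} on the reachable set of
        #   x' = A x + B w,   ||w(t)|| <= 1,   x(0) = 0   (assumed formulation)
        import numpy as np
        import cvxpy as cp

        A = np.array([[0.0, 1.0],
                      [-2.0, -1.0]])   # example stable plant (assumption)
        B = np.array([[0.0],
                      [1.0]])          # the attack enters through the actuator channel

        n, m = A.shape[0], B.shape[1]
        best = None
        for alpha in np.linspace(0.05, 2.0, 20):        # line search over the S-procedure multiplier
            P = cp.Variable((n, n), symmetric=True)
            M = cp.bmat([[A.T @ P + P @ A + alpha * P, P @ B],
                         [B.T @ P, -alpha * np.eye(m)]])
            # vol({x : x'Px <= 1}) ~ det(P)^(-1/2), so maximizing log det P tightens the bound
            prob = cp.Problem(cp.Maximize(cp.log_det(P)),
                              [P >> 1e-6 * np.eye(n), M << 0])
            prob.solve(solver=cp.SCS)
            if prob.status == "optimal" and (best is None or prob.value > best[0]):
                best = (prob.value, P.value, alpha)

        print("tightest ellipsoid found with alpha =", best[2])
        print(best[1])

    The paper's second step, shrinking the actuator limits so that a given set of dangerous states is excluded while giving up as little volume as possible, can be layered on top of the same LMI machinery.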

    Reservoir Computing Approach to Robust Computation using Unreliable Nanoscale Networks

    As we approach the physical limits of CMOS technology, advances in materials science and nanotechnology are making available a variety of unconventional computing substrates that can potentially replace top-down-designed silicon-based computing devices. Inherent stochasticity in the fabrication process and the nanometer scale of these substrates inevitably lead to design variations, defects, faults, and noise in the resulting devices. A key challenge is how to harness such devices to perform robust computation. We propose reservoir computing as a solution. In reservoir computing, computation takes place by translating the dynamics of an excited medium, called a reservoir, into a desired output. This approach eliminates the need for external control and redundancy, and the programming reduces to a closed-form regression problem on the output, which also allows concurrent programming using a single device. Using a theoretical model, we show that both regular and irregular reservoirs are intrinsically robust to structural noise as they perform computation.
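
    The "closed-form regression" readout is easiest to see in a minimal echo state network sketch; the reservoir size, spectral-radius scaling, toy target, and ridge parameter below are illustrative assumptions rather than the paper's setup.

        # Minimal reservoir-computing sketch: excite a random recurrent "reservoir"
        # and train only the linear readout with closed-form ridge regression.
        import numpy as np

        rng = np.random.default_rng(0)
        N, T, washout = 200, 2000, 200

        u = rng.uniform(0.0, 0.5, T)                               # input signal
        target = np.convolve(u, np.ones(10) / 10, mode="same")     # toy target: moving average

        W_in = rng.uniform(-0.5, 0.5, N)
        W = rng.normal(0.0, 1.0, (N, N))
        W *= 0.9 / max(abs(np.linalg.eigvals(W)))                  # keep the spectral radius below 1

        x, states = np.zeros(N), np.zeros((T, N))
        for t in range(T):                                         # drive the reservoir
            x = np.tanh(W @ x + W_in * u[t])
            states[t] = x

        X, y = states[washout:], target[washout:]                  # discard the initial transient
        W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)   # closed-form readout

        print("train NRMSE:", np.sqrt(np.mean((X @ W_out - y) ** 2)) / np.std(y))

    Because only W_out is trained, several independent readouts can in principle be attached to the same excited medium, which is the concurrent programming of a single device that the abstract mentions.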

    Limits on Fundamental Limits to Computation

    An indispensable part of our lives, computing has also become essential to industries and governments. Steady improvements in computer hardware have been supported by periodic doubling of transistor densities in integrated circuits over the last fifty years. Such Moore scaling now requires increasingly heroic efforts, stimulating research in alternative hardware and stirring controversy. To help evaluate emerging technologies and enrich our understanding of integrated-circuit scaling, we review fundamental limits to computation: in manufacturing, energy, physical space, design and verification effort, and algorithms. To outline what is achievable in principle and in practice, we recall how some limits were circumvented and compare loose and tight limits. We also point out that engineering difficulties encountered by emerging technologies may indicate yet-unknown limits. Comment: 15 pages, 4 figures, 1 table.
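
    For scale: fifty years of density doubling roughly every two years (the usual Moore's-law cadence, assumed here) compounds to roughly a 3x10^7-fold increase.

        # Rough arithmetic: doubling every ~2 years for 50 years (assumed cadence)
        print(f"{2 ** (50 // 2):.2e}")   # ~3.36e+07-fold density increase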

    Universal Limits on Computation

    The physical limits to computation have been under active scrutiny over the past decade or two, as theoretical investigations of the possible impact of quantum mechanical processes on computing have begun to make contact with realizable experimental configurations. We demonstrate here that the observed acceleration of the Universe can produce a universal limit on the total amount of information that can be stored and processed in the future, putting an ultimate limit on future technology for any civilization, including a time limit on Moore's Law. The limits we derive are stringent, and cover the possibilities that the computing performed is either distributed or local. A careful consideration of the effect of horizons on information processing is necessary for this analysis, which suggests that the total amount of information that can be processed by any observer is significantly less than the Hawking-Bekenstein entropy associated with the existence of an event horizon in an accelerating universe. Comment: 3 pages including eps figure, submitted to Phys. Rev. Lett.; several typos corrected, several references added, and a short discussion of w < -1 added.
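
    For a sense of the numbers, a back-of-the-envelope calculation (ours, not the paper's) of the Bekenstein-Hawking entropy of the de Sitter event horizon, S = k_B A / (4 l_P^2) with A = 4 pi (c/H0)^2, reproduces the oft-quoted ~10^122 figure; the abstract's claim is that the information any observer can actually process falls well short of it. The Hubble-constant value below is an assumed round number.

        # Back-of-the-envelope de Sitter horizon entropy (illustrative)
        from math import pi, log

        c, G, hbar = 2.998e8, 6.674e-11, 1.055e-34   # SI values
        H0 = 70 * 1000 / 3.086e22                    # ~70 km/s/Mpc in 1/s (assumption)

        l_P2 = hbar * G / c**3                       # Planck length squared
        R_H = c / H0                                 # horizon radius of a pure de Sitter phase
        S = 4 * pi * R_H**2 / (4 * l_P2)             # entropy in units of k_B

        print(f"S/k_B ~ {S:.1e}  (~{S / log(2):.1e} bits)")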

    On the Interpretation of Energy as the Rate of Quantum Computation

    Over the last few decades, developments in the physical limits of computing and quantum computing have increasingly taught us that it can be helpful to think about physics itself in computational terms. For example, work over the last decade has shown that the energy of a quantum system limits the rate at which it can perform significant computational operations, and suggests that we might validly interpret energy as in fact being the speed at which a physical system is "computing," in some appropriate sense of the word. In this paper, we explore the precise nature of this connection. Elementary results in quantum theory show that the Hamiltonian energy of any quantum system corresponds exactly to the angular velocity of state-vector rotation (defined in a certain natural way) in Hilbert space, and also to the rate at which the state-vector's components (in any basis) sweep out area in the complex plane. The total angle traversed (or area swept out) corresponds to the action of the Hamiltonian operator along the trajectory, and we can also consider it to be a measure of the "amount of computational effort exerted" by the system, or effort for short. For any specific quantum or classical computational operation, we can (at least in principle) calculate its difficulty, defined as the minimum effort required to perform that operation on a worst-case input state, and this in turn determines the minimum time required for quantum systems to carry out that operation on worst-case input states of a given energy. As examples, we calculate the difficulty of some basic 1-bit and n-bit quantum and classical operations in a simple unconstrained scenario. Comment: Revised to address reviewer comments. Corrects an error relating to time-ordering, adds some additional references and discussion, shortened in a few places. Figures now incorporated into the text.
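
    A toy numeric check of the energy-as-rotation-rate picture (our illustration, not the paper's derivation): for a two-level system prepared in an equal superposition, the Fubini-Study angle swept by the state vector grows at the rate <H>/hbar, the mean energy above the ground state. Units with hbar = 1; the Hamiltonian and initial state are arbitrary choices.

        # Verify numerically that d(theta)/dt matches <H>/hbar for a simple qubit
        import numpy as np

        E = 2.0                                        # energy gap, hbar = 1 (assumption)
        H = np.diag([0.0, E])
        psi0 = np.array([1.0, 1.0]) / np.sqrt(2)       # equal superposition, <H> = E/2

        ts = np.linspace(0.0, 0.2, 50)
        angles = []
        for t in ts:
            psi_t = np.exp(-1j * np.diag(H) * t) * psi0        # diagonal H: evolve amplitudes directly
            angles.append(np.arccos(np.clip(abs(np.vdot(psi0, psi_t)), 0.0, 1.0)))

        print("numerical d(theta)/dt:", np.polyfit(ts, angles, 1)[0])   # ~= 1.0
        print("mean energy <H>/hbar :", psi0 @ H @ psi0)                # = E/2 = 1.0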

    A Review on Software Architectures for Heterogeneous Platforms

    The demand for computing performance keeps increasing, even as devices are required to be smaller and more energy efficient. For years, the strategy adopted by industry was to make a single processor more powerful by raising its clock frequency and packing in more transistors so that more calculations could be executed. However, the physical limits of such processors are being reached, and one way to meet the growing computing demands has been to adopt heterogeneous computing, i.e., a platform containing more than one type of processor, so that different types of tasks can be executed by the processors specialized in them. Heterogeneous computing, however, poses a number of challenges to software engineering, especially in the architecture and deployment phases. In this paper, we conduct an empirical study that aims at discovering the state of the art in software architecture for heterogeneous computing, with a focus on deployment. We conduct a systematic mapping study that retrieved 28 studies, which were critically assessed to obtain an overview of the research field. We identified gaps and trends that can be used by both researchers and practitioners as guides to further investigate the topic.

    Hierarchical Composition of Memristive Networks for Real-Time Computing

    Advances in materials science have led to physical instantiations of self-assembled networks of memristive devices and demonstrations of their computational capability through reservoir computing. Reservoir computing is an approach that takes advantage of collective system dynamics for real-time computing. A dynamical system, called a reservoir, is excited with a time-varying signal, and observations of its states are used to reconstruct a desired output signal. However, such a monolithic assembly limits the computational power due to signal interdependency and the resulting correlated readouts. Here, we introduce an approach that hierarchically composes a set of interconnected memristive networks into a larger reservoir. We use signal amplification and restoration to reduce reservoir state correlation, which improves the feature extraction from the input signals. Using the same number of output signals, such a hierarchical composition of heterogeneous small networks outperforms monolithic memristive networks by at least 20% on waveform generation tasks. On the NARMA-10 task, we reduce the error by up to a factor of 2 compared to homogeneous reservoirs with sigmoidal neurons, whereas single memristive networks are unable to produce the correct result. Hierarchical composition is key for solving more complex tasks with such novel nanoscale hardware.
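
    A rough software analogue of the idea (plain tanh reservoirs stand in for memristive networks, and the "amplify and restore" link between stages is an arbitrary choice; sizes and scalings are illustrative assumptions) chains small reservoirs in series, trains a single readout on the concatenated states, and evaluates on the standard NARMA-10 recursion y[t+1] = 0.3 y[t] + 0.05 y[t] (sum of y[t-9..t]) + 1.5 u[t-9] u[t] + 0.1.

        # Hierarchy-of-small-reservoirs sketch on NARMA-10 (illustrative stand-in, not the paper's model)
        import numpy as np

        rng = np.random.default_rng(1)
        T, washout = 4000, 500
        u = rng.uniform(0.0, 0.5, T)
        y = np.zeros(T)
        for t in range(9, T - 1):                      # standard NARMA-10 recursion
            y[t + 1] = 0.3 * y[t] + 0.05 * y[t] * y[t - 9:t + 1].sum() + 1.5 * u[t - 9] * u[t] + 0.1

        def small_reservoir(N, seed):
            r = np.random.default_rng(seed)
            W = r.normal(0.0, 1.0, (N, N))
            W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # echo-state scaling
            return W, r.uniform(-0.5, 0.5, N)

        N = 50
        stages = [small_reservoir(N, s) for s in range(3)]
        xs = [np.zeros(N) for _ in stages]
        states = np.zeros((T, N * len(stages)))
        for t in range(T):
            drive = u[t]
            for k, (W, w_in) in enumerate(stages):     # each stage drives the next
                xs[k] = np.tanh(W @ xs[k] + w_in * drive)
                drive = np.tanh(2.0 * xs[k].mean())    # crude "amplify and restore" link
                states[t, N * k:N * (k + 1)] = xs[k]

        X, tgt = states[washout:-1], y[washout + 1:]   # predict y[t+1] from the states at t
        W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(X.shape[1]), X.T @ tgt)
        print("NRMSE:", np.sqrt(np.mean((X @ W_out - tgt) ** 2)) / np.std(tgt))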