Short Term Memory Capacity in Networks via the Restricted Isometry Property
Cortical networks are hypothesized to rely on transient network activity to
support short-term memory (STM). In this paper we study the capacity of
randomly connected recurrent linear networks for performing STM when the input
signals are approximately sparse in some basis. We leverage results from
compressed sensing to provide rigorous non-asymptotic recovery guarantees,
quantifying the impact of the input sparsity level, the input sparsity basis,
and the network characteristics on the system capacity. Our analysis
demonstrates that network memory capacities can scale superlinearly with the
number of nodes, and in some situations can achieve STM capacities that are
much larger than the network size. We provide perfect recovery guarantees for
finite sequences and recovery bounds for infinite sequences. The latter
analysis predicts that network STM systems may have an optimal recovery length
that balances errors due to omission and recall mistakes. Furthermore, we show
that the conditions yielding optimal STM capacity can be embodied in several
network topologies, including networks with sparse or dense connectivities.
Comment: 50 pages, 5 figures
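The abstract's full model and guarantees are beyond a listing like this, but the core idea admits a small illustration: after a sparse input sequence drives a linear recurrent network, the final network state is a compressed linear measurement of the entire sequence, so a compressed-sensing decoder can read the sequence back out. Below is a minimal Python sketch under assumed dynamics x(t+1) = W x(t) + z s(t+1); the orthogonal W, the input weights z, and the OMP decoder are illustrative choices, not the paper's exact construction.

    import numpy as np

    rng = np.random.default_rng(0)
    n, T, k = 100, 150, 5                 # nodes, sequence length, sparsity

    # Random orthogonal recurrent weights keep the dynamics stable.
    W, _ = np.linalg.qr(rng.standard_normal((n, n)))
    z = rng.standard_normal(n) / np.sqrt(n)   # feed-forward input weights

    # After T steps the state is x = A s, where column t of A is W^(T-1-t) z.
    A = np.empty((n, T))
    v = z.copy()
    for t in range(T - 1, -1, -1):
        A[:, t] = v
        v = W @ v

    # A k-sparse input sequence and the network state it produces.
    s = np.zeros(T)
    s[rng.choice(T, size=k, replace=False)] = rng.standard_normal(k)
    x = A @ s

    def omp(A, y, k):
        # Orthogonal matching pursuit: greedily pick the column most
        # correlated with the residual, then re-fit on the support.
        residual, support = y.copy(), []
        for _ in range(k):
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        s_hat = np.zeros(A.shape[1])
        s_hat[support] = coef
        return s_hat

    # Recover a length-150 sequence from a 100-node state (T > n).
    print("recovery error:", np.linalg.norm(omp(A, x, k) - s))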
Exploiting Errors for Efficiency: A Survey from Circuits to Algorithms
When a computational task tolerates a relaxation of its specification or when
an algorithm tolerates the effects of noise in its execution, hardware,
programming languages, and system software can trade deviations from correct
behavior for lower resource usage. We present, for the first time, a synthesis
of research results on computing systems that only make as many errors as their
users can tolerate, from across the disciplines of computer-aided design of
circuits, digital system design, computer architecture, programming languages,
operating systems, and information theory.
Rather than over-provisioning resources at each layer to avoid errors, it can
be more efficient to exploit the masking of errors at one layer, which prevents
them from propagating to higher layers. We survey tradeoffs for
individual layers of computing systems from the circuit level to the operating
system level and illustrate the potential benefits of end-to-end approaches
using two illustrative examples. To tie together the survey, we present a
consistent formalization of terminology, across the layers, which does not
significantly deviate from the terminology traditionally used by research
communities in their layer of focus.
Comment: 35 pages
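As a toy illustration of the accuracy-for-resources trade the survey studies (this example is mine, not the survey's): loop perforation executes only a fraction of a loop's iterations, cutting work roughly proportionally while introducing a bounded sampling error.

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.standard_normal(1_000_000)

    def perforated_mean(x, skip=4):
        # Evaluate only every `skip`-th iteration: roughly skip-fold
        # less work in exchange for a small, tolerable sampling error.
        return x[::skip].mean()

    exact = data.mean()
    approx = perforated_mean(data, skip=4)
    print(f"exact={exact:.6f}  approx={approx:.6f}  error={abs(exact - approx):.2e}")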
Computing infrastructure issues in distributed communications systems : a survey of operating system transport system architectures
The performance of distributed applications (such as file transfer, remote login, teleconferencing, full-motion video, and scientific visualization) is influenced by several factors that interact in complex ways. In particular, application performance is significantly affected by both communication infrastructure factors and computing infrastructure factors. Communication infrastructure factors include channel speed, bit-error rate, and congestion at intermediate switching nodes. Computing infrastructure factors include (among other things) both protocol processing activities (such as connection management, flow control, error detection, and retransmission) and general operating system factors (such as memory latency, CPU speed, interrupt and context switching overhead, process architecture, and message buffering). Due to a several-orders-of-magnitude increase in network channel speed and an increase in application diversity, performance bottlenecks are shifting from the network factors to the transport system factors.
This paper defines an abstraction called an "Operating System Transport System Architecture" (OSTSA) that is used to classify the major components and services in the computing infrastructure. End-to-end network protocols such as TCP, TP4, VMTP, XTP, and Delta-t typically run on general-purpose computers, where they utilize various operating system resources such as processors, virtual memory, and network controllers. The OSTSA provides services that integrate these resources to support distributed applications running on local and wide area networks.
A taxonomy is presented to evaluate OSTSAs in terms of their support for protocol processing activities. We use this taxonomy to compare and contrast five general-purpose commercial and experimental operating systems: System V UNIX, BSD UNIX, the x-kernel, Choices, and Xinu.
Communication-Efficient Search for an Approximate Closest Lattice Point
We consider the problem of finding the closest lattice point to a vector in
n-dimensional Euclidean space when each component of the vector is available at
a distinct node in a network. Our objectives are (i) to minimize the
communication cost and (ii) to characterize the error probability. The approximate closest lattice
point considered here is the one obtained using the nearest-plane (Babai)
algorithm. Assuming a triangular special basis for the lattice, we develop
communication-efficient protocols for computing the approximate lattice point
and determine the communication cost for lattices of dimension n>1. Based on
available parameterizations of reduced bases, we determine the error
probability of the nearest plane algorithm for two dimensional lattices
analytically, and present a computational error estimation algorithm in three
dimensions. For dimensions 2 and 3, our results show that the error probability
increases with the packing density of the lattice.
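For reference, a minimal Python sketch of the nearest-plane (Babai) algorithm in the triangular-basis setting the abstract assumes; the column-vector convention and the toy example are my choices, and the paper's distributed, communication-efficient protocols are not reproduced here.

    import numpy as np

    def babai_nearest_plane(B, t):
        # Approximate closest lattice point for an upper-triangular basis B
        # (columns are basis vectors): fix coefficients from the last
        # coordinate down, rounding against the diagonal entries, which
        # equal the Gram-Schmidt norms for a triangular basis.
        n = B.shape[0]
        c = np.zeros(n, dtype=np.int64)
        for i in range(n - 1, -1, -1):
            r = t[i] - B[i, i + 1:] @ c[i + 1:]
            c[i] = int(round(r / B[i, i]))
        return B @ c, c

    # Toy 3-dimensional example.
    B = np.array([[2.0, 1.0, 0.5],
                  [0.0, 3.0, 1.0],
                  [0.0, 0.0, 4.0]])
    t = np.array([1.7, 5.2, 7.9])
    v, c = babai_nearest_plane(B, t)
    print("lattice point:", v, "coefficients:", c)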
Model-based Hazard and Impact Analysis
Hazard and impact analysis is an indispensable task during the specification
and development of safety-critical technical systems, and particularly of their
software-intensive control parts. There is a lack of methods supporting an
effective (reusable, automated) and integrated (cross-disciplinary) way to
carry out such analyses.
This report was motivated by an industrial project whose goal was to survey
and propose methods and models for documentation and analysis of a system and
its environment to support hazard and impact analysis as an important task of
safety engineering and system development. We present and investigate three
perspectives on how to (i) properly encode safety-relevant domain knowledge for
better reuse and automation, (ii) identify and assess all relevant hazards, and
(iii) pre-process this information so that it is easily accessible for reuse in
other safety and systems engineering activities and in similar engineering
projects.
Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets.
This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed, either explicitly or implicitly, to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, robustness, and/or speed. These claims are supported by extensive numerical experiments and a detailed error analysis.
The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k)) floating-point operations (flops) in contrast to O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multiprocessor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
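The prototype scheme the abstract describes (random sampling to capture the range of the matrix, then a deterministic factorization of the compressed matrix) fits in a few lines of numpy. The Gaussian test matrix and the oversampling parameter p below are standard but assumed choices, and this sketch omits refinements such as power iterations.

    import numpy as np

    def randomized_svd(A, k, p=10, seed=0):
        # Approximate rank-k SVD via a randomized range finder.
        rng = np.random.default_rng(seed)
        Omega = rng.standard_normal((A.shape[1], k + p))  # random test matrix
        Q, _ = np.linalg.qr(A @ Omega)   # basis capturing most of A's action
        B = Q.T @ A                      # small (k+p) x n compressed matrix
        Ub, S, Vt = np.linalg.svd(B, full_matrices=False)
        return (Q @ Ub)[:, :k], S[:k], Vt[:k, :]

    # Example: recover a planted rank-20 structure in a 2000 x 1000 matrix.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((2000, 20)) @ rng.standard_normal((20, 1000))
    U, S, Vt = randomized_svd(A, k=20)
    print("relative error:", np.linalg.norm(A - (U * S) @ Vt) / np.linalg.norm(A))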
Wavelet Theory
The wavelet is a powerful mathematical tool that plays an important role in science and technology. This book looks at some of the most creative and popular applications of wavelets, including biomedical signal processing, image processing, communication signal processing, the Internet of Things (IoT), acoustical signal processing, financial market data analysis, energy and power management, and COVID-19 pandemic measurements and calculations. The editor's personal interest is in applying the wavelet transform to identify time-domain changes in signals and their corresponding frequency components, and to improve power amplifier behavior.
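To make the time-frequency localization point concrete, here is a minimal sketch using the discrete wavelet transform (via the PyWavelets package, an assumed choice; the book does not prescribe a library) to find when a high-frequency burst occurs in a signal:

    import numpy as np
    import pywt  # PyWavelets

    t = np.linspace(0, 1, 1024)
    signal = np.sin(2 * np.pi * 8 * t)                 # slow background tone
    signal[500:520] += 2.0 * np.sin(2 * np.pi * 120 * t[500:520])  # brief burst

    # Multilevel DWT: one coarse approximation plus detail bands per scale.
    coeffs = pywt.wavedec(signal, 'db4', level=4)

    # The finest detail band spikes where the burst occurs, revealing
    # *when* the high-frequency change happens, not just that it exists.
    d1 = coeffs[-1]
    print("burst located near sample", 2 * int(np.argmax(np.abs(d1))))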
A Domain Decomposition Approach to Implementing Fault Slip in Finite-Element Models of Quasi-static and Dynamic Crustal Deformation
We employ a domain decomposition approach with Lagrange multipliers to
implement fault slip in a finite-element code, PyLith, for use in both
quasi-static and dynamic crustal deformation applications. This integrated
approach to solving both quasi-static and dynamic simulations leverages common
finite-element data structures and implementations of various boundary
conditions, discretization schemes, and bulk and fault rheologies. We have
developed a custom preconditioner for the Lagrange multiplier portion of the
system of equations that provides excellent scalability with problem size
compared to conventional additive Schwarz methods. We demonstrate application
of this approach using benchmarks for both quasi-static viscoelastic
deformation and dynamic spontaneous rupture propagation that verify the
numerical implementation in PyLith.
Comment: 14 pages, 15 figures
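PyLith's actual finite-element machinery is far more involved, but the algebraic structure the abstract describes (fault slip imposed via Lagrange multipliers) is a saddle-point system. The following toy numpy sketch, a one-interface, two-degree-of-freedom example with made-up stiffnesses and not PyLith code, shows a prescribed slip enforced as a constraint, with the multiplier playing the role of the fault traction:

    import numpy as np

    # Two elastic "subdomains" (springs) whose shared interface must open
    # by a prescribed slip d, enforced weakly via a Lagrange multiplier.
    K = np.diag([2.0, 2.0])        # stiffness of each subdomain DOF
    C = np.array([[1.0, -1.0]])    # constraint: u1 - u2 = d across the fault
    f = np.zeros(2)                # no external loads
    d = np.array([0.5])            # prescribed fault slip

    # Saddle-point (KKT) system: [K C^T; C 0] [u; lam] = [f; d]
    M = np.block([[K, C.T], [C, np.zeros((1, 1))]])
    sol = np.linalg.solve(M, np.concatenate([f, d]))
    u, lam = sol[:2], sol[2:]
    print("displacements:", u, "fault traction (multiplier):", lam)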