Exploiting Errors for Efficiency: A Survey from Circuits to Algorithms
When a computational task tolerates a relaxation of its specification or when
an algorithm tolerates the effects of noise in its execution, hardware,
programming languages, and system software can trade deviations from correct
behavior for lower resource usage. We present, for the first time, a synthesis
of research results on computing systems that only make as many errors as their
users can tolerate, from across the disciplines of computer-aided design of
circuits, digital system design, computer architecture, programming languages,
operating systems, and information theory.
Rather than over-provisioning resources at each layer to avoid errors, it can
be more efficient to exploit the masking of errors at one layer, which
prevents them from propagating to higher layers. We survey tradeoffs for
individual layers of computing systems from the circuit level to the operating
system level and illustrate the potential benefits of end-to-end approaches
using two examples. To tie together the survey, we present a
consistent formalization of terminology, across the layers, which does not
significantly deviate from the terminology traditionally used by research
communities in their layer of focus.
Comment: 35 pages
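The accuracy-for-resources tradeoff this survey formalizes can be illustrated with a minimal sketch of loop perforation, one representative software-level technique in this literature; the function names, data, and skip factor below are illustrative, not from the survey:

```python
import random

def mean_exact(xs):
    # Full-precision baseline: average over every sample.
    return sum(xs) / len(xs)

def mean_perforated(xs, skip=4):
    # Loop perforation: visit only every `skip`-th sample, trading a
    # small, statistically bounded error for ~(1 - 1/skip) less work.
    kept = xs[::skip]
    return sum(kept) / len(kept)

random.seed(0)
data = [random.gauss(10.0, 1.0) for _ in range(10000)]
exact = mean_exact(data)
approx = mean_perforated(data)
rel_err = abs(approx - exact) / abs(exact)
```

For error-tolerant aggregations like this one, the deviation stays well under a percent while three quarters of the work is skipped.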
Non-negative Matrix Factorization with Linear Constraints for Single-Channel Speech Enhancement
This paper investigates a non-negative matrix factorization (NMF)-based
approach to the semi-supervised single-channel speech enhancement problem where
only non-stationary additive noise signals are given. The proposed method
relies on a sinusoidal model of speech production, which is integrated into the
NMF framework using linear constraints on dictionary atoms. This method is further
developed to regularize harmonic amplitudes. Simple multiplicative algorithms
are presented. The experimental evaluation was made on TIMIT corpus mixed with
various types of noise. The proposed method is shown to outperform several
state-of-the-art noise suppression techniques in terms of signal-to-noise
ratio.
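A minimal sketch of the NMF machinery the abstract builds on: plain Lee-Seung multiplicative updates for the Euclidean cost on a toy nonnegative "spectrogram". The paper's sinusoidal-model linear constraints and harmonic regularization are omitted, and all dimensions are illustrative:

```python
import numpy as np

def nmf_multiplicative(V, rank, n_iter=500, eps=1e-9, seed=0):
    # Plain Lee-Seung multiplicative updates for V ~= W @ H under the
    # Euclidean cost; nonnegativity is preserved because every update
    # multiplies by a ratio of nonnegative quantities.
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy nonnegative "spectrogram": two spectral atoms, time-varying gains.
rng = np.random.default_rng(1)
V = rng.random((16, 2)) @ rng.random((2, 40))
W, H = nmf_multiplicative(V, rank=2)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

In the semi-supervised setting the abstract describes, part of the dictionary W would be pre-trained on noise and the rest constrained to the speech model.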
On Sommerfeld precursor in a Lorentz medium
A one-dimensional electromagnetic problem of Sommerfeld precursor evolution,
resulting from a finite rise-time signal excitation in a dispersive Lorentz
medium is considered. The effect of the initial signal's rate of growth, as
well as of the medium damping, on the precursor shape and magnitude is discussed.
The analysis applied is based on an approach employing uniform asymptotic
expansions. In addition, new approximate formulas are given for the location of
the distant saddle points, which affect the local frequency and damping of the
precursor. The results obtained are illustrated numerically and compared with
results known from the literature.
Comment: 19 pages, 8 figures
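For context, the dispersion underlying precursor analysis is the single-resonance Lorentz dielectric model; a short sketch with illustrative parameter values (not taken from the paper):

```python
import numpy as np

def lorentz_index(omega, omega_p=1.0, omega_0=2.0, gamma=0.1):
    # Single-resonance Lorentz model (illustrative parameters):
    # eps(w) = 1 + wp^2 / (w0^2 - w^2 - i*gamma*w),  n(w) = sqrt(eps(w)).
    eps = 1.0 + omega_p**2 / (omega_0**2 - omega**2 - 1j * gamma * omega)
    return np.sqrt(eps)

w = np.linspace(0.1, 5.0, 500)
n = lorentz_index(w)
# Absorption (Im n) peaks near the resonance w0, with a width set by the
# damping gamma; as w -> infinity, n -> 1, which is why the front of the
# Sommerfeld precursor travels at the vacuum speed of light.
```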
Information-Theoretic Limits for the Matrix Tensor Product
This paper studies a high-dimensional inference problem involving the matrix
tensor product of random matrices. This problem generalizes a number of
contemporary data science problems including the spiked matrix models used in
sparse principal component analysis and covariance estimation, and the
stochastic block model used in network analysis. The main results are
single-letter formulas (i.e., analytical expressions that can be approximated
numerically) for the mutual information and the minimum mean-squared error
(MMSE) in the Bayes optimal setting where the distributions of all random
quantities are known. We provide non-asymptotic bounds and show that our
formulas describe exactly the leading-order terms in the mutual information and
MMSE in the high-dimensional regime where the number of rows and the number of
columns grow large.
On the technical side, this paper introduces some new techniques for the
analysis of high-dimensional matrix-valued signals. Specific contributions
include a novel extension of the adaptive interpolation method that uses
order-preserving positive semidefinite interpolation paths, and a variance
inequality between the overlap and the free energy that is based on
continuous-time I-MMSE relations.
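The I-MMSE relations mentioned above reduce, for the scalar Gaussian channel, to the classical identity dI/ds = mmse(s)/2 (Guo, Shamai, and Verdu); a quick numerical check for Gaussian input, where both sides have closed forms:

```python
import math

def mutual_info(s):
    # I(X; sqrt(s)*X + N) in nats for X, N ~ N(0, 1): 0.5 * ln(1 + s).
    return 0.5 * math.log(1.0 + s)

def mmse(s):
    # Minimum mean-squared error of estimating X from sqrt(s)*X + N.
    return 1.0 / (1.0 + s)

# I-MMSE identity: dI/ds = mmse(s) / 2.  Verify by central difference.
s, h = 1.5, 1e-6
deriv = (mutual_info(s + h) - mutual_info(s - h)) / (2.0 * h)
```

The matrix-valued analogue of this identity is what connects the paper's single-letter mutual-information formulas to the MMSE.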
Collider phenomenology of Hidden Valley mediators of spin 0 or 1/2 with semivisible jets
Many models of Beyond the Standard Model physics contain particles that are
charged under both Standard Model and Hidden Valley gauge groups, yet very
little effort has been put into establishing their experimental signatures. We
provide a general overview of the collider phenomenology of spin 0 or 1/2
mediators with non-trivial gauge numbers under both the Standard Model and a
single new confining group. Due to the possibility of many unconventional
signatures, the focus is on direct production with semivisible jets. For the
mediators to be able to decay, a global symmetry must be broken. This is
best done by introducing a set of operators explicitly violating this symmetry.
We find that there is only a finite number of such renormalizable operators and
that the phenomenology can be classified into five distinct categories. We show
that large regions of the parameter space are already excluded, while others
are unconstrained by current search strategies. We also discuss how searches
could be modified to better probe these unconstrained regions by exploiting
special properties of semivisible jets.
Comment: 40 pages, 11 figures, published version
Measurement of dynamic interferometer baseline perturbations by means of wavelength-scanning interferometry
A novel approach for measuring fast oscillations of the absolute
interferometer optical path difference (OPD) has been developed. The principles
of frequency-scanning interferometry are utilized for registration of the
interferometer spectral function, from which the OPD is calculated. The
proposed approach enables one to capture the absolute baseline variations at
frequencies much higher than the spectral acquisition rate. Unlike
conventional approaches, which associate a single baseline value with the
registered spectrum, the proposed method applies a specially developed
demodulation procedure to the spectrum. This makes it possible to capture the
baseline variations which took place during the spectrum acquisition. An
analytical model describing the limits on the parameters of the baseline
variations that can be registered is formulated. Experimental verification of
the proposed approach and the developed model has been performed.
Comment: 11 pages, 4 figures
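The basic principle, recovering an absolute OPD from a registered spectral function, can be sketched as follows; the cosine fringe model and all numbers are illustrative, and the paper's demodulation procedure for intra-scan baseline variations is not reproduced here:

```python
import numpy as np

# Synthetic frequency scan: the interference fringe period in optical
# frequency encodes the OPD via I(nu) = A + B*cos(2*pi*OPD*nu/c).
c = 3e8                                   # speed of light, m/s
opd_true = 0.012                          # assumed 12 mm path difference
nu = np.linspace(193e12, 194e12, 4096)    # 1 THz scan near 1550 nm
I = 1.0 + 0.5 * np.cos(2 * np.pi * opd_true * nu / c)

# Estimate the OPD from the dominant fringe frequency via an FFT of the
# registered spectral function (DC component removed first).
spec = np.abs(np.fft.rfft(I - I.mean()))
freqs = np.fft.rfftfreq(nu.size, d=nu[1] - nu[0])   # cycles per Hz
opd_est = freqs[np.argmax(spec)] * c
```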
Is "Compressed Sensing" compressive? Can it beat the Nyquist Sampling Approach?
Data compression capability of "Compressed sensing (sampling)" in signal
discretization is numerically evaluated and found to be far from the
theoretical upper bound defined by signal sparsity. It is shown that, for
cases where ordinary sampling with subsequent data compression is prohibitive,
there is at least one alternative to Compressed sensing that is simpler, more
intuitive, and more efficient in terms of data compression capability: random
sparse sampling and restoration of band-limited image approximations based on
the energy compaction capability of transforms. It is also shown that assertions that
"Compressed sensing" can beat the Nyquist sampling approach are rooted in
misinterpretation of sampling theory.
Comment: 5 pages, 4 figures
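The alternative the abstract advocates, random sparse sampling followed by restoration of a transform-domain band-limited approximation, can be sketched with a DCT basis and least squares; all dimensions below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, M = 256, 8, 64     # signal length, band limit, number of samples

# Orthonormal DCT-II basis; the first K columns span the
# "band-limited" subspace where transform energy is compacted.
n = np.arange(N)
C = np.cos(np.pi * (n[:, None] + 0.5) * n[None, :] / N) * np.sqrt(2.0 / N)
C[:, 0] /= np.sqrt(2.0)

# A signal that is exactly band-limited in the DCT domain.
coef_true = np.zeros(N)
coef_true[:K] = rng.standard_normal(K)
x = C @ coef_true

# Random sparse sampling: keep M of N samples, then restore the
# band-limited approximation by least squares on the kept rows.
idx = rng.choice(N, size=M, replace=False)
coef_hat, *_ = np.linalg.lstsq(C[idx, :K], x[idx], rcond=None)
x_hat = C[:, :K] @ coef_hat
```

Because M well exceeds K and the data are noiseless, the band-limited approximation is restored exactly from the sparse samples.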
Towards Verification of Uncertain Cyber-Physical Systems
Cyber-Physical Systems (CPS) pose new challenges to verification and
validation that go beyond the proof of functional correctness based on
high-level models. Particular challenges, especially for formal methods, are
their heterogeneity and scalability. For numerical simulation, uncertain
behavior can hardly be covered comprehensively, which motivates the use of
symbolic methods.
The paper describes an approach for symbolic simulation-based verification of
CPS with uncertainties. We define a symbolic model and representation of
uncertain computations: Affine Arithmetic Decision Diagrams. Then we integrate
this approach in the SystemC AMS simulator that supports simulation in
different models of computation. We demonstrate the approach by analyzing a
water-level monitor with uncertainties, self-diagnosis, and error-reactions.
Comment: In Proceedings SNR 2017, arXiv:1704.0242
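The affine-arithmetic layer of the Affine Arithmetic Decision Diagram representation can be sketched as a toy class (the decision-diagram part, which handles uncertain control flow, is omitted; this is not the paper's implementation):

```python
class Affine:
    """Toy affine form x0 + sum_i xi*eps_i with each eps_i in [-1, 1]."""
    _fresh = [0]  # counter issuing fresh noise-symbol indices

    def __init__(self, center, terms=None):
        self.center = center
        self.terms = dict(terms or {})

    @classmethod
    def interval(cls, lo, hi):
        # Introduce a new uncertain quantity spanning [lo, hi].
        i = cls._fresh[0]
        cls._fresh[0] += 1
        return cls((lo + hi) / 2.0, {i: (hi - lo) / 2.0})

    def __add__(self, other):
        # Linear operations combine coefficients of shared noise
        # symbols, preserving correlations between quantities.
        t = dict(self.terms)
        for i, c in other.terms.items():
            t[i] = t.get(i, 0.0) + c
        return Affine(self.center + other.center, t)

    def __neg__(self):
        return Affine(-self.center, {i: -c for i, c in self.terms.items()})

    def __sub__(self, other):
        return self + (-other)

    def bounds(self):
        r = sum(abs(c) for c in self.terms.values())
        return (self.center - r, self.center + r)

# Correlation tracking: x - x is exactly zero, where naive interval
# arithmetic would return the over-approximation [-1, 1].
x = Affine.interval(1.0, 2.0)
```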
Tensor B-Spline Numerical Methods for PDEs: a High-Performance Alternative to FEM
Tensor B-spline methods are a high-performance alternative for solving partial
differential equations (PDEs). This paper gives an overview of the principles
of tensor B-spline methodology, shows their use, analyzes their performance in
application examples, and discusses their merits. Tensors preserve the
dimensional structure of a discretized PDE, which makes it possible to develop
highly efficient computational solvers. B-splines provide high-quality
approximations, lead to a sparse structure of the system operator represented
by shift-invariant separable kernels in the domain, and are mesh-free by
construction. Further, high-order bases can easily be constructed from
B-splines. In order to demonstrate the advantageous numerical performance of
tensor B-spline methods, we studied the solution of a large-scale
heat-equation problem (roughly 0.8 billion nodes) on a heterogeneous
workstation with a multi-core CPU and GPUs. Our experimental results
confirm the excellent numerical approximation properties of tensor
B-splines, and their unique combination of high computational efficiency and
low memory consumption, showing substantial improvements over standard
finite-element methods (FEM).
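The separability that the abstract credits for efficiency comes from tensor-product B-spline bases; a minimal sketch of the standard centered cubic B-spline kernel and a 2-D tensor-product basis function, with illustrative grid sizes:

```python
import numpy as np

def cubic_bspline(t):
    # Centered cubic B-spline kernel, support [-2, 2]; its integer
    # shifts form a partition of unity (they sum to 1 everywhere).
    t = np.abs(np.asarray(t, dtype=float))
    out = np.zeros_like(t)
    m1 = t < 1.0
    m2 = (t >= 1.0) & (t < 2.0)
    out[m1] = 2.0 / 3.0 - t[m1] ** 2 + t[m1] ** 3 / 2.0
    out[m2] = (2.0 - t[m2]) ** 3 / 6.0
    return out

# 2-D tensor-product basis function B(x, y) = b(x) * b(y): separable and
# shift-invariant, which is what lets the system operator be applied
# dimension by dimension instead of assembling a full 2-D matrix.
x = np.linspace(-2.0, 2.0, 81)
bx = cubic_bspline(x)
B2 = np.outer(bx, bx)
```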
Fundamental Limits of Electromagnetic Axion and Hidden-Photon Dark Matter Searches: Part I - The Quantum Limit
We discuss fundamental limits of electromagnetic searches for axion and
hidden-photon dark matter. We begin by showing the signal-to-noise advantage of
scanned resonant detectors over purely resistive broadband detectors. We
discuss why the optimal detector circuit must be driven by the dark-matter
signal through a reactance; examples of such detectors include single-pole
resonators. We develop a framework to optimize dark matter searches using prior
information about the dark matter signal (e.g. astrophysical and
direct-detection constraints or preferred search ranges). We define integrated
sensitivity as a figure of merit in comparing searches over a wide frequency
range and show that the Bode-Fano criterion sets a limit on integrated
sensitivity. We show that when resonator thermal noise dominates amplifier
noise, substantial sensitivity is available away from the resonator bandwidth.
Additionally, we show that the optimized single-pole resonator is close to the
Bode-Fano limit, establishing the resonator as a near-ideal method for
single-moded dark-matter detection. We optimize time allocation in a scanned
resonator using priors and derive quantum limits on resonant search
sensitivity. We show that, in contrast to some previous work, resonant searches
benefit from quality factors above one million, the characteristic quality
factor of the dark-matter signal. We also show that the optimized resonator is
superior, in signal-to-noise ratio, to the optimized reactive broadband
detector at all frequencies at which a resonator may practically be made. At
low frequencies, the application of our optimization may enhance scan rates by
a few orders of magnitude. Finally, we discuss prospects for evading the
quantum limits using backaction evasion, photon counting, squeezing and other
nonclassical approaches, as a prelude to Part II.
Comment: Extended discussion on coupling to dark matter signal in Section III,
the role of priors in scan time allocation in Section V B, and resonant vs.
broadband searches in Appendix
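The role of the quality factor can be illustrated with the near-resonance Lorentzian power response of a single-pole resonator; the resonance frequency and Q below are illustrative, not values from the paper:

```python
import numpy as np

def lorentzian_response(f, f0, Q):
    # Near-resonance power response of a single-pole resonator:
    # |H|^2 ~ 1 / (1 + 4 Q^2 ((f - f0)/f0)^2).
    return 1.0 / (1.0 + 4.0 * Q**2 * ((f - f0) / f0) ** 2)

f0, Q = 5e6, 1e6                 # 5 MHz resonator, Q = 1e6 (illustrative)
f = np.linspace(f0 - 25.0, f0 + 25.0, 20001)
h2 = lorentzian_response(f, f0, Q)

# The half-power (-3 dB) bandwidth is f0/Q = 5 Hz here, comparable to
# the linewidth of a dark-matter signal whose characteristic quality
# factor is ~1e6.
sel = f[h2 >= 0.5]
bw = sel[-1] - sel[0]
```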