Time Domain Computation of a Nonlinear Nonlocal Cochlear Model with Applications to Multitone Interaction in Hearing
A nonlinear nonlocal cochlear model of the transmission line type is studied
in order to capture the multitone interactions and resulting tonal suppression
effects. The model, which can serve as a module for voice signal processing, is a
one-dimensional (in space) damped dispersive nonlinear PDE based on the mechanics
and phenomenology of hearing. It describes the motion of the basilar membrane (BM)
in the cochlea driven by input pressure waves. Both elastic damping and
selective longitudinal fluid damping are present. The former is nonlinear and
nonlocal in BM displacement, and plays a key role in capturing tonal
interactions. The latter is active only near the exit boundary (helicotrema),
and is built in to damp out the remaining long waves. The initial boundary
value problem is numerically solved with a semi-implicit second order finite
difference method. Solutions reach a multi-frequency quasi-steady state.
Numerical results are shown on two tone suppression from both high-frequency
and low-frequency sides, consistent with known behavior of two tone
suppression. Suppression effects among three tones are demonstrated by showing
how the response magnitudes of the two fixed tones are reduced as we vary the
third tone in frequency and amplitude. We observe qualitative agreement of our
model solutions with existing cat auditory neural data. The model is thus
simple and efficient as a processing tool for voice signals.
Comment: 23 pages, 7 figures; added reference
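The semi-implicit second-order time stepping mentioned in the abstract can be illustrated on a much simpler toy problem. The sketch below is not the paper's cochlear model: it solves a plain damped wave equation with illustrative grid and coefficients, treating the stiff damping term implicitly and the spatial operator explicitly in a second-order leapfrog scheme.

```python
import numpy as np

def step(u_now, u_prev, dt, dx, c, gamma):
    """One semi-implicit leapfrog step for u_tt = c^2 u_xx - gamma u_t.

    The damping term is treated implicitly (centered at t^n), the
    spatial operator explicitly; the scheme is second order in time.
    """
    lap = np.zeros_like(u_now)
    lap[1:-1] = (u_now[2:] - 2 * u_now[1:-1] + u_now[:-2]) / dx**2
    a = gamma * dt / 2.0
    u_next = (2 * u_now - (1 - a) * u_prev + dt**2 * c**2 * lap) / (1 + a)
    u_next[0] = u_next[-1] = 0.0  # clamped ends
    return u_next

# march a Gaussian pulse forward; damping drives the amplitude down
n, c, gamma = 201, 1.0, 5.0
dx = 1.0 / (n - 1)
dt = 0.4 * dx / c  # CFL-limited by the explicit spatial part
x = np.linspace(0.0, 1.0, n)
u_prev = np.exp(-300 * (x - 0.5) ** 2)
u_now = u_prev.copy()  # zero initial velocity
for _ in range(2000):
    u_now, u_prev = step(u_now, u_prev, dt, dx, c, gamma), u_now
print(np.abs(u_now).max())  # far below the initial amplitude of 1
```

The implicit treatment of the damping term keeps the update pointwise (no linear solve) while avoiding the stability penalty an explicit stiff term would impose.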
Learned Perceptual Image Enhancement
Learning a typical image enhancement pipeline involves minimization of a loss
function between enhanced and reference images. While L1 and L2 losses are
perhaps the most widely used functions for this purpose, they do not
necessarily lead to perceptually compelling results. In this paper, we show
that adding a learned no-reference image quality metric to the loss can
significantly improve enhancement operators. This metric is implemented using a
CNN (convolutional neural network) trained on a large-scale dataset labelled
with aesthetic preferences of human raters. This loss allows us to conveniently
perform back-propagation in our learning framework to simultaneously optimize
for similarity to a given ground truth reference and perceptual quality. This
perceptual loss is only used to train parameters of image processing operators,
and does not impose any extra complexity at inference time. Our experiments
demonstrate that this loss can be effective for tuning a variety of operators
such as local tone mapping and dehazing.
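The idea of augmenting a fidelity loss with a frozen learned quality term can be sketched as follows. The `quality_score` model here is a hypothetical stand-in (a fixed linear map on simple image statistics), not the CNN trained on human aesthetic ratings described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def quality_score(img, w):
    """Stand-in for a learned no-reference quality model: a fixed
    linear map on crude image statistics (purely illustrative)."""
    feats = np.array([img.mean(), img.std(),
                      np.abs(np.diff(img, axis=0)).mean()])
    return float(feats @ w)

def total_loss(enhanced, reference, w, weight=0.1):
    """Fidelity (L1 to the reference) plus a perceptual penalty from
    the frozen quality model; lower is better."""
    l1 = np.abs(enhanced - reference).mean()
    return l1 + weight * (1.0 - quality_score(enhanced, w))

ref = rng.random((8, 8))
enh = ref + 0.05 * rng.standard_normal((8, 8))
w = np.array([0.2, 0.5, -0.3])
print(total_loss(enh, ref, w))
```

Because the quality model is differentiable and its parameters are frozen, gradients flow only into the enhancement operator, matching the abstract's point that no extra cost is paid at inference time.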
A compiler approach to scalable concurrent program design
The programmer's most powerful tool for controlling complexity in program design is abstraction. We seek to use abstraction in the design of concurrent programs, so as to
separate design decisions concerned with decomposition, communication, synchronization, mapping, granularity, and load balancing. This paper describes programming and compiler techniques intended to facilitate this design strategy. The programming techniques are based on a core programming notation with two important properties: the ability to separate concurrent programming concerns, and extensibility with reusable programmer-defined
abstractions. The compiler techniques are based on a simple transformation system together with a set of compilation transformations and portable run-time support. The
transformation system allows programmer-defined abstractions to be defined as source-to-source transformations that convert abstractions into the core notation. The same
transformation system is used to apply compilation transformations that incrementally transform the core notation toward an abstract concurrent machine. This machine can be implemented on a variety of concurrent architectures using simple run-time support.
The transformation, compilation, and run-time system techniques have been implemented and are incorporated in a public-domain program development toolkit. This
toolkit operates on a wide variety of networked workstations, multicomputers, and shared-memory
multiprocessors. It includes a program transformer, concurrent compiler, syntax checker, debugger, performance analyzer, and execution animator. A variety of substantial
applications have been developed using the toolkit, in areas such as climate modeling and fluid dynamics.
Chip level simulation of fault tolerant computers
Chip-level modeling techniques, functional fault simulation, simulation software development, a more efficient high-level version of GSP, and a parallel architecture for functional simulation are discussed.
A generic tool for interactive complex image editing
Many complex image editing techniques require a certain per-pixel property or magnitude to be known, e.g., simulating depth-of-field effects requires a depth map. This work presents an efficient interaction paradigm that approximates any per-pixel magnitude from a few user strokes by propagating the sparse user input to each pixel of the image. The propagation scheme is based on a linear least-squares system of equations which represents local and neighboring restrictions over superpixels. After each user input, the system responds immediately, propagating the values and applying the corresponding filter. Our interaction paradigm is generic, enabling image editing applications to run at interactive rates by changing just the image processing algorithm while keeping our proposed propagation scheme. We illustrate this through three interactive applications: depth-of-field simulation, dehazing, and tone mapping.
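A minimal version of this stroke-propagation idea can be written as a single least-squares solve. The sketch below uses individual grid cells in place of superpixels, with soft data terms at stroked cells and smoothness terms between neighbors; the grid size and weights are illustrative, not the paper's.

```python
import numpy as np

def propagate(h, w, strokes, lam=1.0):
    """Propagate sparse user strokes to every cell of an h x w grid by
    least squares: data rows pin stroked cells to their values, and
    smoothness rows tie each cell to its right and down neighbors
    (cells stand in for superpixels in this toy sketch)."""
    def idx(r, c):
        return r * w + c
    rows, b = [], []
    for (r, c), v in strokes.items():       # data constraints
        row = np.zeros(h * w)
        row[idx(r, c)] = 1.0
        rows.append(row); b.append(v)
    for r in range(h):                      # smoothness constraints
        for c in range(w):
            for dr, dc in ((0, 1), (1, 0)):
                r2, c2 = r + dr, c + dc
                if r2 < h and c2 < w:
                    row = np.zeros(h * w)
                    row[idx(r, c)], row[idx(r2, c2)] = lam, -lam
                    rows.append(row); b.append(0.0)
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(b), rcond=None)
    return x.reshape(h, w)

# two strokes in opposite corners yield a smooth ramp in between
depth = propagate(4, 4, {(0, 0): 0.0, (3, 3): 1.0})
print(depth.round(2))
```

At interactive scale the same system would be assembled sparsely over superpixels and re-solved after each stroke, which is what makes the immediate feedback loop in the abstract feasible.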
Xampling: Signal Acquisition and Processing in Union of Subspaces
We introduce Xampling, a unified framework for signal acquisition and
processing of signals in a union of subspaces. The framework has two main
functions: analog compression, which narrows the input bandwidth prior to
sampling with commercial devices, and a nonlinear algorithm that then detects
the input subspace prior to conventional signal processing. A representative
union model of spectrally-sparse signals serves as a test-case to study these
Xampling functions. We adopt three metrics for the choice of analog
compression: robustness to model mismatch, required hardware accuracy and
software complexities. We conduct a comprehensive comparison between two
sub-Nyquist acquisition strategies for spectrally-sparse signals, the random
demodulator and the modulated wideband converter (MWC), in terms of these
metrics and draw operative conclusions regarding the choice of analog
compression. We then address low-rate signal processing, developing an algorithm
that enables convenient signal processing at sub-Nyquist rates from samples
obtained by the MWC. We conclude by showing that a variety of other sampling
approaches for different union classes fit nicely into our framework.
Comment: 16 pages, 9 figures, submitted to IEEE for possible publication
Model-Based Calibration of Filter Imperfections in the Random Demodulator for Compressive Sensing
The random demodulator is a recent compressive sensing architecture providing
efficient sub-Nyquist sampling of sparse band-limited signals. The compressive
sensing paradigm requires an accurate model of the analog front-end to enable
correct signal reconstruction in the digital domain. In practice, hardware
devices such as filters deviate from their desired design behavior due to
component variations. Existing reconstruction algorithms are sensitive to such
deviations, which fall into the more general category of measurement matrix
perturbations. This paper proposes a model-based technique that aims to
calibrate filter model mismatches to facilitate improved signal reconstruction
quality. The mismatch is considered to be an additive error in the discretized
impulse response. We identify the error by sampling a known calibrating signal,
enabling least-squares estimation of the impulse response error. The error
estimate and the known system model are used to calibrate the measurement
matrix. Numerical analysis demonstrates the effectiveness of the calibration
method even for highly deviating low-pass filter responses. The performance of
the proposed method is also compared to a state-of-the-art method based on
discrete Fourier transform trigonometric interpolation.
Comment: 10 pages, 8 figures, submitted to IEEE Transactions on Signal Processing
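The core calibration step described in the abstract reduces to a linear least-squares problem. The sketch below is a noiseless toy version, not the paper's method as published: the calibrating signal, filter length, and nominal response are all illustrative choices.

```python
import numpy as np

def conv_matrix(s, L):
    """N x L matrix C such that C @ h is the (truncated) convolution
    of the known signal s with an impulse response h of length L."""
    N = len(s)
    C = np.zeros((N, L))
    for k in range(L):
        C[k:, k] = s[: N - k]
    return C

rng = np.random.default_rng(1)
L, N = 8, 64
h_nom = np.exp(-np.arange(L) / 2.0)         # nominal (designed) impulse response
err_true = 0.05 * rng.standard_normal(L)    # unknown hardware deviation
s = rng.standard_normal(N)                  # known calibrating signal
y = conv_matrix(s, L) @ (h_nom + err_true)  # output of the actual filter

# least-squares estimate of the additive impulse-response error
C = conv_matrix(s, L)
err_hat, *_ = np.linalg.lstsq(C, y - C @ h_nom, rcond=None)
h_cal = h_nom + err_hat                     # calibrated response for the
print(np.abs(err_hat - err_true).max())     # measurement matrix (tiny residual)
```

With noise-free measurements and a full-column-rank system, the error is recovered essentially exactly; the calibrated response would then replace the nominal one when building the measurement matrix for reconstruction.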