200 research outputs found
QR-RLS algorithm for error diffusion of color images
Printing color images on color printers and displaying them on computer monitors requires a significant reduction in the number of physically distinct colors, which causes degradation in image quality. An efficient method to improve the display quality of a quantized image is error diffusion, which works by distributing the previous quantization errors to neighboring pixels, exploiting the eye's averaging of colors in the neighborhood of the point of interest. This creates the illusion of more colors. A new error diffusion method is presented in which the adaptive recursive least-squares (RLS) algorithm is used. This algorithm provides local optimization of the error diffusion filter along with smoothing of the filter coefficients in a neighborhood. To improve the performance, a diagonal scan is used in processing the image. © 2000 Society of Photo-Optical Instrumentation Engineers. [S0091-3286(00)00611-5]
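The fixed-weight scheme that adaptive methods such as this one build on can be sketched in a few lines. The following is the classic Floyd–Steinberg diffusion for a grayscale image, not the paper's RLS-optimized filter; it is a minimal illustration of how each pixel's quantization error is pushed to not-yet-visited neighbors:

```python
import numpy as np

def floyd_steinberg(img, levels=2):
    """Quantize a grayscale image in [0, 1] to `levels` values,
    diffusing each pixel's quantization error to unvisited neighbors."""
    out = img.astype(float).copy()
    h, w = out.shape
    step = levels - 1
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = round(old * step) / step   # nearest quantization level
            out[y, x] = new
            err = old - new
            # Classic Floyd-Steinberg weights: 7/16, 3/16, 5/16, 1/16
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return out
```

Because the error is redistributed rather than discarded, local averages of the dithered image track the original, which is exactly the perceptual effect the abstract describes.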
Reliable and Efficient Parallel Processing Algorithms and Architectures for Modern Signal Processing
Least-squares (LS) estimation and spectral decomposition algorithms constitute the heart of modern signal processing and communication problems. Implementations of recursive LS and spectral decomposition algorithms on parallel processing architectures such as systolic arrays, with efficient fault-tolerant schemes, are the major concerns of this dissertation. There are four major results. First, we propose the systolic block Householder transformation with application to recursive least-squares minimization. It is successfully implemented on a systolic array with a two-level pipelined implementation, at the vector level as well as at the word level. Second, a real-time algorithm-based concurrent error detection scheme based on the residual method is proposed for the QRD RLS systolic array. Fault diagnosis, order-degraded reconfiguration, and performance analysis are also considered. Third, the dynamic range, stability, error detection capability under finite-precision implementation, order-degraded performance, and residual estimation under faulty situations for the QRD RLS systolic array are studied in detail. Finally, we propose the use of multi-phase systolic algorithms for spectral decomposition based on the QR algorithm. Two systolic architectures, one based on a triangular array and the other on a rectangular array, are presented for the multi-phase operations with fault-tolerance considerations. Eigenvectors and singular vectors can easily be obtained using the multi-phase operations. Performance issues are also considered.
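As a minimal numerical illustration of why QR-based least squares is attractive (this is the batch factorization idea, not the systolic-array implementation described above), the LS problem can be solved through an orthogonal factorization instead of the normal equations, which avoids squaring the condition number of the data matrix:

```python
import numpy as np

# Synthetic overdetermined system A x ~= b (sizes are arbitrary here)
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 4))
x_true = np.array([1.0, -2.0, 0.5, 3.0])
b = A @ x_true + 0.01 * rng.standard_normal(100)

# QR decomposition (computed with Householder reflections in LAPACK):
# minimizing ||Ax - b|| reduces to the triangular system R x = Q^T b.
Q, R = np.linalg.qr(A)
x_hat = np.linalg.solve(R, Q.T @ b)
```

The QRD RLS arrays in the dissertation update such a triangular factor recursively as new data rows arrive, rather than refactoring from scratch.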
Distributed Recursive Least-Squares: Stability and Performance Analysis
The recursive least-squares (RLS) algorithm has well-documented merits for reducing complexity and storage requirements when it comes to online estimation of stationary signals, as well as for tracking slowly-varying nonstationary processes. In this paper, a distributed recursive least-squares (D-RLS) algorithm is developed for cooperative estimation using ad hoc wireless sensor networks. Distributed iterations are obtained by minimizing a separable reformulation of the exponentially-weighted least-squares cost using the alternating-minimization algorithm. Sensors carry out reduced-complexity tasks locally, and exchange messages with one-hop neighbors to consent on the network-wide estimates adaptively. A steady-state mean-square error (MSE) performance analysis of D-RLS is conducted by studying a stochastically-driven 'averaged' system that approximates the D-RLS dynamics asymptotically in time. For sensor observations that are linearly related to the time-invariant parameter vector sought, the simplifying independence setting assumptions facilitate deriving accurate closed-form expressions for the MSE steady-state values. The problems of mean- and MSE-sense stability of D-RLS are also investigated, and easily-checkable sufficient conditions are derived under which a steady state is attained. Without resorting to diminishing step-sizes, which compromise the tracking ability of D-RLS, stability ensures that per-sensor estimates hover inside a ball of finite radius centered at the true parameter vector, with high probability, even when inter-sensor communication links are noisy. Interestingly, computer simulations demonstrate that the theoretical findings are accurate also in the pragmatic settings whereby sensors acquire temporally-correlated data.

Comment: 30 pages, 4 figures, submitted to IEEE Transactions on Signal Processing.
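For reference, the centralized exponentially-weighted RLS recursion that D-RLS distributes across sensors can be sketched as follows (a textbook single-node form, not the D-RLS iterations themselves; the forgetting factor and initialization constant are illustrative choices):

```python
import numpy as np

def rls(X, y, lam=0.99, delta=100.0):
    """Exponentially-weighted recursive least squares.
    X: (T, p) regressor rows, y: (T,) observations, lam: forgetting factor."""
    p = X.shape[1]
    w = np.zeros(p)            # current parameter estimate
    P = delta * np.eye(p)      # inverse of the weighted sample covariance
    for x_t, y_t in zip(X, y):
        k = P @ x_t / (lam + x_t @ P @ x_t)   # gain vector
        e = y_t - x_t @ w                     # a priori estimation error
        w = w + k * e                         # estimate update
        P = (P - np.outer(k, x_t @ P)) / lam  # rank-one inverse update
    return w
```

Keeping lam strictly below 1 is what gives the recursion its tracking ability; the paper's stability analysis shows this can be retained in the distributed setting without diminishing step-sizes.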
Compression algorithms for biomedical signals and nanopore sequencing data
The massive generation of biological digital information creates various computing
challenges such as its storage and transmission. For example, biomedical
signals, such as electroencephalograms (EEG), are recorded by multiple sensors over
long periods of time, resulting in large volumes of data. Another example is genome
DNA sequencing data, where the amount of data generated globally is seeing explosive
growth, leading to increasing needs for processing, storage, and transmission
resources. In this thesis we investigate the use of data compression techniques for
this problem, in two different scenarios where computational efficiency is crucial.
First we study the compression of multi-channel biomedical signals. We present
a new lossless data compressor for multi-channel signals, GSC, which achieves compression
performance similar to the state of the art, while being more computationally
efficient than other available alternatives. The compressor uses two novel
integer-based implementations of the predictive coding and expert advice schemes
for multi-channel signals. We also develop a version of GSC optimized for EEG
data. This version manages to significantly lower compression times while attaining
similar compression performance for that specific type of signal.
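The integer predictive-coding idea underlying such compressors can be illustrated with a toy single-channel version (GSC's actual predictors and expert-advice weighting are more elaborate than this). Because everything stays in integer arithmetic, the decoder reconstructs the signal exactly, which is what makes the scheme lossless:

```python
def encode_residuals(samples):
    """Integer predictive coding: predict each sample as the previous one
    and emit the residual. Smooth signals yield small residuals, which an
    entropy coder can then represent in fewer bits."""
    prev = 0
    residuals = []
    for s in samples:
        residuals.append(s - prev)
        prev = s
    return residuals

def decode_residuals(residuals):
    """Exact inverse of encode_residuals: accumulate the residuals."""
    prev = 0
    samples = []
    for r in residuals:
        prev = prev + r
        samples.append(prev)
    return samples
```

A real compressor would follow the prediction stage with an entropy coder over the residuals; only the prediction step is shown here.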
In a second scenario we study the compression of DNA sequencing data produced
by nanopore sequencing technologies. We present two novel lossless compression algorithms
specifically tailored to nanopore FASTQ files. ENANO is a reference-free
compressor, which mainly focuses on the compression of quality scores. It achieves
state-of-the-art compression performance while being fast and having low memory
consumption compared to other popular FASTQ compression tools. On the
other hand, RENANO is a reference-based compressor, which improves on ENANO,
by providing a more efficient base call sequence compression component. For RENANO
two algorithms are introduced, corresponding to the following scenarios: a
reference genome is available without cost to both the compressor and the decompressor;
and the reference genome is available only on the compressor side, and a
compacted version of the reference is included in the compressed file. Both algorithms of RENANO significantly improve the compression performance of ENANO, with similar compression times, at the cost of higher memory requirements.
Adjustable dynamic range for PAPR reduction schemes in large-scale MIMO-OFDM systems
In a multi-input multi-output (MIMO) communication system, the power that the output antenna amplifiers can deliver must be limited. Their signal is a combination of many independent channels, so the demanded amplitude can peak at many times the average value. The orthogonal frequency division multiplexing (OFDM) system produces high peak signals because many subcarrier components are added by an inverse discrete Fourier transform process at the base station. This causes out-of-band spectral regrowth. If simple clipping of the input signal is used, there will be in-band distortion in the transmitted signals and the bit error rate will increase substantially.
This work presents a novel technique that reduces the peak-to-average power ratio (PAPR). It combines two main stages: a variable clipping level and an adaptive optimizer that takes advantage of the channel state information sent from all users in the cell.
Simulation results show that the proposed method achieves better overall system performance than conventional peak-reduction systems in terms of the symbol error rate. As a result, the required linear output of the power amplifiers can be minimized, with a great saving in cost.
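The PAPR problem and the baseline clipping operation referred to above can be sketched as follows (a generic single-antenna illustration, not the proposed adaptive optimizer; the subcarrier count and clipping level are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256
# Random QPSK subcarriers -> time-domain OFDM symbol via inverse FFT
sym = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
x = np.fft.ifft(sym) * np.sqrt(N)   # scaled to unit average power

# Peak-to-average power ratio of the unclipped symbol, in dB
papr_db = 10 * np.log10(np.max(np.abs(x)**2) / np.mean(np.abs(x)**2))

# Simple amplitude clipping to a fixed level A: this lowers the PAPR but
# introduces the in-band distortion that adaptive schemes try to avoid.
A = 1.5
mag = np.abs(x)
x_clip = np.where(mag > A, A * x / mag, x)
papr_clip_db = 10 * np.log10(np.max(np.abs(x_clip)**2)
                             / np.mean(np.abs(x_clip)**2))
```

Summing 256 independent subcarriers makes the time-domain samples nearly Gaussian, which is why peaks of several times the RMS value occur routinely.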
Investigations on efficient adaptation algorithms
Ankara: Department of Electrical and Electronics Engineering and Institute of Engineering and Sciences, Bilkent University, 1995. Thesis (Master's), Bilkent University, 1995. Includes bibliographical references (leaves 71-75). Efficient adaptation algorithms intended to improve the performance of the LMS and the RLS algorithms are introduced.
It is shown that nonlinear transformations of the input and the desired
signals by a softlimiter improve the convergence speed of the LMS algorithm
at no cost, with a small bias in the optimal filter coefficients. Also, the new
algorithm can be used to filter α-stable non-Gaussian processes for which the
conventional adaptive algorithms are useless.
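A minimal sketch of the idea, assuming a tanh soft limiter (the thesis's exact nonlinearity is not given here, so tanh stands in as an illustrative choice): both the input and the desired signals are passed through the limiter before a standard LMS update.

```python
import numpy as np

def lms(x, d, p=4, mu=0.01, limiter=None):
    """LMS adaptive filter of order p; optionally soft-limit the input
    and desired signals first (illustrative nonlinearity: tanh)."""
    if limiter is not None:
        x, d = limiter(x), limiter(d)
    w = np.zeros(p)
    for n in range(p - 1, len(x)):
        u = x[n - p + 1:n + 1][::-1]   # most recent p samples, newest first
        e = d[n] - w @ u               # instantaneous error
        w = w + mu * e * u             # stochastic-gradient update
    return w

# Identify an unknown FIR channel h from input/output data
rng = np.random.default_rng(2)
x = rng.standard_normal(5000)
h = np.array([0.8, -0.4, 0.2, 0.1])
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(5000)
w_hat = lms(x, d, limiter=np.tanh)   # soft-limited variant
```

As the abstract notes, the limiter leaves a small bias in the converged coefficients; the payoff is robustness on heavy-tailed (e.g. α-stable) inputs where the raw LMS gradient can blow up.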
In a second approach, a prewhitening filter is used to increase the convergence
speed of the LMS algorithm. It is shown that prewhitening does not
change the relation between the input and the desired signals provided that the
relation is a linear one. A low order adaptive prewhitening filter can provide
significant speed up in the convergence.
Finally, adaptive filtering algorithms running on roughly quantized signals
are proposed to decrease the number of multiplications in the LMS and the
RLS algorithms. Although they require significantly fewer computations, their
performances are comparable to those of the conventional LMS and RLS algorithms. Belge, Murat. M.S.